Late on Sept. 24, chip giant Intel (NASDAQ: INTC) announced a new family of chips code-named Coffee Lake, targeted at the desktop personal-computer market. This volley of chips, which seems primarily aimed at the gaming and enthusiast portion of the desktop PC market, will become available for purchase on Oct. 5, Intel says. Here are three things you should probably know about these new chips.

More cores per dollar

The key selling point of these new Coffee Lake processors compared with the previous-generation Kaby Lake chips is that, even though they have more cores than their predecessors, pricing hasn't budged much from the previous generation. The three chips that'll probably be the most interesting to the gaming and enthusiast community are the Core i7-8700K, Core i5-8600K, and Core i3-8350K. They're unlocked, which means the clock speeds are user-adjustable.

The Core i7-8700K has six cores and 12 hardware threads and is rated at a base frequency of 3.7GHz and a maximum single-core turbo of 4.7GHz. Intel says the recommended customer pricing for this chip is $359 -- just $20 more than what Intel wanted for the four-core, eight-thread Core i7-7700K.

Moving down the stack, Intel's asking $257 for the Core i5-8600K. This, like the 8700K, is a six-core processor, but Intel's hyperthreading technology, which allows one core to act as two, is disabled in this part. The chip also has some cache memory disabled, with 9 MB active compared with the 12 MB active on the 8700K, and runs at a lower frequency than the 8700K out of the box -- base frequency is 3.6GHz and maximum single-core turbo is 4.3GHz. Its predecessor, the 7600K, offered four cores and four threads for $242. So again, pricing goes up a smidgen, but in exchange, Intel's offering far more cores.

And finally, at the bottom of the unlocked Coffee Lake processor stack is the Core i3-8350K. This chip has four cores and four threads, and Intel wants $168 for it.
This chip succeeds the Core i3-7350K, which had two physical cores, each with hyperthreading enabled, for a total of four logical cores. Pricing for this chip remains unchanged from the last generation.

It's built using Intel's 14nm++ technology

At Intel's Technology and Manufacturing Day earlier this year, the company disclosed that it planned to introduce a third generation of its 14nm manufacturing technology, branded 14nm++, that delivers a roughly 10% performance improvement over its second-generation 14nm+ technology. These Coffee Lake chips are the first announced products to make use of Intel's 14nm++ technology.

Intel doesn't appear to have pushed peak single-core frequency up much with the new chips, as the 8700K runs at a maximum single-core turbo of 4.7GHz, up just 0.2GHz from the 4.5GHz maximum single-core turbo of the 7700K. But the new manufacturing technology is probably a key enabler of Intel's ability to fit additional cores into these chips while keeping power consumption within acceptable levels.

These chips won't work in older motherboards

Alongside these new chips, Intel announced a new platform controller hub known as Z370. Truth be told, the Z370 is identical to the prior-generation Z270 in terms of features and capabilities. However, Intel says the Z370-based motherboards that the Coffee Lake chips require to function will have several enhancements to support the increased peak power draw of these new chips, particularly when overclocked. As a result, anybody buying one of these new Coffee Lake chips will need to buy a motherboard with a 300-series chipset to pair with it. Right now, the only 300-series chipset Intel is releasing is Z370, which is aimed at PC enthusiasts, but early next year Intel is expected to launch a full stack of 300-series chipsets targeting cheaper boards and more mainstream use cases.
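The "more cores per dollar" pitch is easy to check with a little arithmetic. Here is a quick sketch using the recommended prices and core counts quoted above (the figures come from this article, not from any official Intel price list):

```python
# Price per core for the unlocked Coffee Lake parts and their
# Kaby Lake predecessors, using the prices quoted in the article.
chips = {
    # name: (cores, recommended price in USD)
    "Core i7-8700K": (6, 359),
    "Core i7-7700K": (4, 339),
    "Core i5-8600K": (6, 257),
    "Core i5-7600K": (4, 242),
    "Core i3-8350K": (4, 168),
}

def price_per_core(name):
    """Return the recommended price divided by the core count."""
    cores, price = chips[name]
    return price / cores

for name in chips:
    print(f"{name}: ${price_per_core(name):.2f} per core")
```

Running this shows the 8700K at roughly $60 per core versus about $85 per core for the 7700K, which is the whole thrust of Intel's pricing pitch for this generation.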
The bad news for Intel is that some people who would've been happy to plop one of these Coffee Lake chips into their pre-existing Z170 or Z270 motherboards might just hold off on upgrading altogether, costing Intel a lucrative chip sale. The good news for Intel is that when a customer does buy a Coffee Lake processor, that purchase will be accompanied by the sale of a new motherboard. A motherboard purchase results in a chipset sale for Intel, as well as the possible sale of Ethernet controllers, Wi-Fi chips, and even Thunderbolt controllers. In addition, Intel relies on its motherboard partners to develop compelling, feature-rich motherboards to sell alongside its processors. Because each Coffee Lake chip sold effectively mandates the sale of a new motherboard, Intel's motherboard partners benefit, which likely makes them more willing to continue investing heavily in Intel's future platforms.
Apple already does the most end-to-end work of any company building smartphones. It has complete control over the physical design, codes all the software, and designs the CPU at the heart of every phone. But it still relies heavily on outside companies to produce many of the critical components for its phones and laptops. That leads to differences in the quality of chips between different iPhones, and can also be a strategic problem when, say, one of your suppliers of baseband modems decides to sue you. That's why today's report from Nikkei Asian Review, which claims that Apple is "expanding efforts to develop proprietary semiconductors to better compete in artificial intelligence, reducing reliance on major suppliers such as Intel and Qualcomm," makes such perfect sense. Nikkei says that "Apple is keen to expand its semiconductor capabilities further.
They say the company is interested in building core processors for notebooks, modem chips for iPhones, and a chip that integrates touch, fingerprint and display driver functions." Those are some of the components that have caused the most headaches for Apple. In this year's iPhone 8 and 8 Plus, for example, Apple has different modems from Intel and Qualcomm in different versions of the phone. Qualcomm's modem (on paper at least) is more capable than Intel's, as it's compatible with gigabit LTE technologies like LTE-U, LAA, and 4×4 MIMO. Intel's modem is more limited, so to ensure "feature parity" between the different models, Apple has had to artificially restrict Qualcomm's modem and purposely make the new iPhones slower than their Android rivals. Qualcomm is also in an ongoing legal battle with Apple, but Apple's reliance on the chip manufacturer for its iPhone supply restricts Apple's legal strategy in the fight. By designing its own chips, Apple could ensure that it gets the best technology in every new device and reduce its reliance on rival corporations that could be a strategic threat. Apple has already moved (via manufacturing partner Foxconn) to start producing its own screens going forward. It makes sense that as time passes, it will look down the list of vital components and start trying to lock down the manufacturing process part by part.

Intel is planning to launch Cannon Lake before the end of the year, followed by Ice Lake and Tiger Lake, all of which will presumably be built on a 10-nanometer manufacturing process. Looking even further down the road, however, Intel's website hints at a possible 7nm product called Sapphire Rapids. Sapphire Rapids will be Intel's 12th-generation Core processor family. The company's just-released Coffee Lake line for the desktop is considered its 8th-generation Core family, so Sapphire Rapids is several generations away.
We don't know a whole lot about Sapphire Rapids at this point. The only concrete information from Intel that is publicly available is that Sapphire Rapids will be part of the company's Tinsley server and workstation platform. It is reasonable to speculate that Intel's 12th-generation Core processor will represent a move to 7nm. Intel has had trouble getting to 10nm as quickly as it had hoped, and while Samsung, MediaTek, and Qualcomm are already there, Intel claims its 10nm technology will be more advanced and a "full generation ahead" of others with regard to transistor density and overall performance. The bigger race is between Intel and AMD, not Intel and mobile chip makers. A recently leaked roadmap points to AMD readying Zen 2 processors (Matisse) for a 2019 launch, and those are expected to be built on a 7nm FinFET process. For Intel, Sapphire Rapids will be a major architectural change. It could also bring 8-core/16-thread processors to the mainstream, just as Coffee Lake is now bringing 6-core/12-thread CPUs to the mainstream for the first time ever. Prior to Coffee Lake, any Intel processor with more than four cores was reserved for its high-end desktop (HEDT) family.

The brain has long inspired the design of computers and their software. Now Intel has become the latest tech company to decide that mimicking the brain's hardware could be the next stage in the evolution of computing. On Monday the company unveiled an experimental "neuromorphic" chip called Loihi. Neuromorphic chips are microprocessors whose architecture is configured to mimic the biological brain's network of neurons and the connections between them, called synapses.
While neural networks—the in-vogue approach to artificial intelligence and machine learning—are also inspired by the brain and use layers of virtual neurons, they are still implemented on conventional silicon hardware such as CPUs and GPUs. The main benefit of mimicking the architecture of the brain on a physical chip, say neuromorphic computing's proponents, is energy efficiency—the human brain runs on roughly 20 watts. The "neurons" in neuromorphic chips carry out the role of both processor and memory, removing the need to shuttle data back and forth between separate units the way traditional chips do. Each neuron also only needs to be powered while it's firing. At present, most machine learning is done in data centers due to the massive energy and computing requirements. Creating chips that capture some of nature's efficiency could allow AI to be run directly on devices like smartphones, cars, and robots. This is exactly the kind of application Michael Mayberry, managing director of Intel's research arm, touts in a blog post announcing Loihi. He talks about CCTV cameras that can run image recognition to identify missing persons or traffic lights that can track traffic flow to optimize timing and keep vehicles moving. There's still a long way to go before that happens, though. According to Wired, so far Intel has only been working with prototypes, and the first full-size version of the chip won't be built until November. Once complete, it will feature 130,000 neurons and 130 million synaptic connections split between 128 computing cores. The device will be 1,000 times more energy-efficient than standard approaches, according to Mayberry, but more impressive are claims that the chip will be capable of continuous learning. Normally, deep learning works by training a neural network on giant datasets to create a model that can then be applied to new data.
The Loihi chip will combine training and inference on the same chip, which will allow it to learn on the fly, constantly updating its models and adapting to changing circumstances without having to be deliberately re-trained. A select group of universities and research institutions will be the first to get their hands on the new chip in the first half of 2018, but Mayberry said it could be years before it's commercially available. Whether commercialization happens at all may largely depend on whether early adopters can get the hardware to solve any practically useful problems. So far, neuromorphic computing has struggled to gain traction outside the research community. IBM released a neuromorphic chip called TrueNorth in 2014, but the device has yet to showcase any commercially useful applications. Lee Gomes gives an excellent summary of the hurdles facing neuromorphic computing in IEEE Spectrum. One is that deep learning can run on very simple, low-precision hardware that can be optimized to use very little power, which suggests complicated new architectures may struggle to find purchase. It's also not easy to transfer deep learning approaches developed on conventional chips over to neuromorphic hardware, and even Intel Labs chief scientist Narayan Srinivasa admitted to Forbes that Loihi wouldn't work well with some deep learning models. Finally, there's considerable competition in the quest to develop new computer architectures specialized for machine learning. GPU vendors Nvidia and AMD have pivoted to take advantage of this newfound market, and companies like Google and Microsoft are developing their own in-house solutions. Intel, for its part, isn't putting all its eggs in one basket. Last year it bought two companies building chips for specialized machine learning—Movidius and Nervana—and this was followed up with the $15 billion purchase of self-driving car chip- and camera-maker Mobileye.
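The efficiency argument above, that neuromorphic "neurons" keep their own state and draw power only when they fire, can be illustrated with a toy leaky integrate-and-fire neuron. This is a simplified sketch of the general idea, not Intel's actual Loihi neuron model; the threshold and leak values below are invented for illustration:

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron.

    The neuron's membrane potential is its own 'memory': it accumulates
    input, leaks a little each step, and emits a spike (an event) only
    when it crosses the threshold. Between spikes, nothing needs to run.
    """
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(t)   # event: the neuron "fires"
            potential = 0.0    # reset after the spike
    return spikes

# Sparse input: the neuron only does meaningful work at a few time steps.
spike_times = simulate_lif([0.6, 0.6, 0.0, 0.0, 1.2, 0.0])
print(spike_times)  # fires at steps 1 and 4
```

In a neuromorphic chip, this "between spikes, nothing runs" property is where the energy savings come from: computation is event-driven rather than clock-driven, so idle neurons cost almost nothing.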
And while the jury is still out on neuromorphic computing, it makes sense for a company eager to position itself as the AI chipmaker of the future to have its fingers in as many pies as possible. There are a growing number of voices suggesting that despite its undoubted power, deep learning alone will not allow us to imbue machines with the kind of adaptable, general intelligence humans possess. Which new approaches will get us there is hard to predict, but it's entirely possible they will only work on hardware that closely mimics the one device we already know is capable of supporting this kind of intelligence—the human brain.

Apple may be considering releasing a laptop in the future that uses its own chips instead of a chip from Intel, according to a report from Nikkei on Friday. Apple's iPhones use an Apple-designed chip based on the ARM instruction set, but its current lineup of Mac laptops and desktops uses chips from Intel that run the x86 instruction set. It would be a significant engineering challenge to enable software designed for Intel chips to run on Apple's ARM-based processors. MacOS alone would be a huge effort. But given the dramatic performance gains and low power usage of Apple's recent iPhone chips, it's not surprising that Apple would consider it. In fact, according to some benchmarks, the recently launched iPhone 8 outperforms Apple's high-end MacBook Pro. Apple showed a chart making this point when it launched the iPhone 7 in 2016. Apple's ability to design its own semiconductors and other chips is a huge advantage over other smartphone makers, who typically buy off-the-shelf components from companies such as Qualcomm and MediaTek. Apple's head of chip engineering, Johny Srouji, was promoted to senior vice president — essentially joining CEO Tim Cook's inner circle — in late 2015.
Apple is also looking to design its own chips for touch sensors and its own modem chip for iPhones, according to the Nikkei report. Currently, TSMC is believed to be Apple's primary manufacturer for its main chip design, the A-series processor.

Shortly after Apple's (NASDAQ:AAPL) iPhone 8 went up for sale, iFixit got hold of one, tore it down, and, with some help from the chip experts at TechInsights, managed to identify many of the key chips inside it and publish its findings. As is the case with each new generation of iPhones, there's a lot of cool stuff inside the iPhone 8 and 8 Plus. Perhaps the most surprising revelation from the tear-down, though -- at least to this Fool -- was Apple's selection of a cellular modem in the versions of the phones destined for networks that require support for the CDMA standard. Since the launch of the iPhone 7 series of smartphones, Apple has built versions of its iPhones that are compatible with networks that utilize the CDMA standard, as well as versions that aren't. Apple doesn't build separate models like this for no reason. It wishes to dual-source cellular modems from both market leader Qualcomm (NASDAQ:QCOM) and fast follower Intel (NASDAQ:INTC), but Intel's modems don't yet support CDMA networks -- though Intel is prepping a new modem for launch next year that will finally support the standard. Tear-down reports have been published for both the Intel- and Qualcomm-powered versions of the new iPhones. The Intel-powered phones use Intel's new XMM 7480 LTE modem, which was completely expected. What wasn't expected was that the Qualcomm-based iPhones use Qualcomm's most capable standalone LTE modem, known as the Snapdragon X16.
What's the big deal?

Ahead of the publication of the tear-down reports, I had expected Apple to choose Qualcomm's relatively mature Snapdragon X12 LTE modem for the Qualcomm-based versions of this year's iPhone. The Snapdragon X12 is a generation old at this point, but in terms of peak theoretical speeds, it's comparable to Intel's new XMM 7480. Apple reportedly throttled the X12 in the iPhone 7-series smartphones to bring them to parity with the Intel XMM 7360, and since I expected Apple to want to maintain feature parity between the two devices, it seemed reasonable that Apple would use an older-generation Qualcomm modem, if only to save some money, since older components tend to cost less than newer ones. That expectation was clearly wrong. The Snapdragon X16 inside the Qualcomm-based iPhone 8 and 8 Plus offers higher peak download speeds than the XMM 7480 in the Intel-based iPhones -- a gigabit per second for the Qualcomm modem, and just 600 megabits per second for the Intel modem. At this point, it's not clear if Apple is intentionally throttling the Qualcomm-based iPhones, or if the Qualcomm-based iPhones can achieve higher peak speeds than their Intel-based counterparts. Either way, the significant disparity in potential cellular capabilities between the Qualcomm-based iPhones and the Intel-based ones may explain why Apple avoided calling attention to the cellular capabilities of its new iPhones when it announced them.

Some implications

In addition to the fact that Qualcomm-based iPhones have chips capable of faster speeds than the Intel-based iPhones, there's another thing to consider: power efficiency and battery life. Intel hasn't disclosed what manufacturing technology its XMM 7480 is built on, but considering Intel's silence on the matter, I'd imagine that it's either a foundry 28nm or 20nm technology. By contrast, the Snapdragon X16 is known to be manufactured using Samsung's (NASDAQOTH:SSNLF) 14nm LPP manufacturing technology.
If I'm right that the 7480 is being built using a foundry 28nm or 20nm technology, that could mean a sizable power consumption advantage for the Qualcomm part simply by being built on a more advanced technology.

Looking toward the future

Apple seems to be increasingly serious about using the best cellular modems in its phones -- subject, of course, to the constraint of needing to dual-source this component. To that end, I expect that Apple's next iPhone will use Intel's upcoming XMM 7560 LTE modem in some models and Qualcomm's upcoming Snapdragon X20 LTE modem in others. On paper, the Snapdragon X20 LTE modem should be superior to Intel's XMM 7560, as Qualcomm claims peak download speeds of 1.2 gigabits per second for the X20, compared with Intel's 1 gigabit per second for the XMM 7560. But the gap between the Intel-based phones and the Qualcomm-based ones should at least continue to narrow. Moreover, the two next-generation modems should be more comparable in terms of manufacturing technology. The Snapdragon X20 is expected to be built using Samsung's 10nm LPE technology, which should be denser than Intel's 14nm technology, but those technologies should be in the same ballpark in terms of chip size and power efficiency.

Ashraf Eassa owns shares of Intel and Qualcomm. The Motley Fool owns shares of and recommends Apple. The Motley Fool owns shares of Qualcomm. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

To get a sense of computer scientist Naveen Rao, just take a look at his hands. The 42-year-old has busted all 10 of his fingers over a lifetime of skiing, skateboarding, bicycling, rollerblading, race-car driving, wrestling and hoops. He's not a clod; he's a risk taker who pushes physical and mental boundaries. On the mental side, he's trying to quicken the computer industry's move into a new age of artificial intelligence by creating chips and software inspired by the structure of the human brain.
What sets Rao apart from others attempting the same thing is the fact that Intel last year bought his San Diego company, Nervana, for $400 million. That's some stamp of approval. Intel is a computer-chip industry giant, with sales topping $60 billion a year. But it's an aging giant. Intel turns 50 next year, and everyone agrees it requires some revitalization, having infamously missed the industry's massive shift to mobile computing. Now Intel is trying to catch the industry's next rising wave: artificial intelligence, or, more precisely, a subset of AI known as deep learning. Cars that drive themselves, personal devices that converse with their owners and carry out tasks, social media features that can identify your friends in group photographs - all are made possible by deep learning. So are computers that can diagnose CAT-scan images on their own or pick stocks for hedge fund managers. Snap a picture of a road sign in Spain and your phone translates it into English. That app is based on deep learning. Recommendations just for you on Netflix and Spotify? Deep learning plays a role in those features, too. "We can now solve problems that we couldn't solve before," said Peter Norvig, director of research at Google and a key figure in the history of AI. Deep learning is a new, more marketing-friendly term for a concept that's been around for decades, known as neural networks. Rather than run calculations in serial fashion, like traditional computers, neural networks mimic the behavior of nerve clusters in the human brain - firing signals and stifling them, arranging data into patterns roughly analogous to human memory. Why is deep learning suddenly on the rise? The enormous amount of data being created around the world by sensors mounted in everything from smartphones to drones to surveillance cameras and more, matched with ever more powerful chips and other computer hardware, makes it possible.
As a result, the computer industry is changing "faster than Intel or anyone else expected it to," said Mario Morales, semiconductor analyst at IDC. Experts say artificial intelligence is now a major growth business that will rake in billions of dollars for companies that provide the best hardware and software. That's where Rao comes in. His job is to help Intel move beyond the central processing unit, or CPU - the product that has ruled the semiconductor industry for several decades. The CPU is at the heart of every desktop and laptop computer built since the start of the personal computer revolution in the 1970s. Two companies dominated that field: Microsoft, with its operating system software, and Intel, with its "Intel Inside" central processors. Deep learning software runs on CPUs, but inefficiently. CPUs are inherently general-purpose devices. The maximum computing power required for deep learning is possible only with new kinds of special-purpose chips. The holy grail is a new kind of chip tailor-made for deep learning. That's what Rao and Nervana are attempting. If they succeed, Intel, too, will reap the rewards. Rao sports an athlete's build in his T-shirt and blue jeans. A child of Indian immigrants, he grew up in a tiny eastern Kentucky town called Whitesburg. In the late 1970s, his father, a physicist turned physician, drove Rao and his brother to a Radio Shack in Hazard, 40 minutes away, where they wrote programs in Basic on floor-model computers. "We didn't have a computer at home yet," he says. After a rural childhood that mixed outdoor sports with Dungeons & Dragons and novels by Asimov, Heinlein and Tolkien, Rao attended Duke University. There, he was attracted to neural networks after learning how the human eye detected edges on visual objects. Edge detection at the time was a cutting-edge problem in computer vision, with a number of solutions being offered up, including neural networks.
He cut his teeth on computer chips at Sun Microsystems in Silicon Valley. Still drawn by all things neural, he earned a Ph.D. at Brown University under neuroscience pioneer John Donoghue. He headed back West to chipmaker Qualcomm in San Diego, on a team conducting neural net research. Qualcomm is doing interesting work, Rao said. But he had specific ideas of his own and craved the freedom of a startup. In 2014, he founded Nervana. It wasn't long before larger companies took an interest. When Intel found out another big tech company was sniffing around, Rao says, it moved quickly. Which was fine with Rao. More than anything, he sought a company that excelled at chip manufacturing. "The best at that is Intel," he said. Analysts concur. "They have the best manufacturing facilities in the world," said Linley Gwennap, founder of The Linley Group and longtime semiconductor industry analyst. "With Intel, you also have the advantage of walking into a market with Intel's name, Intel's resources." Nervana is crafting a neural net computer chip that Intel will release by the end of the year, known as Lake Crest. Just as important, Intel and Nervana are coming up with sets of software tools developers can use to write deep learning programs. A chip company called Nvidia, a rising star in Silicon Valley, has an early lead on Intel and all other neural net newcomers, including Advanced Micro Devices. Gamers know Nvidia well. The company started out making graphics accelerators that worked with Intel chips to keep computer games from lagging. Nvidia continued to improve the chips as gamers demanded action in real time so they could compete with players online. The company's graphical processing units, or GPUs, were designed to cram thousands of "cores" into one chip that processed information in parallel. It turned out GPUs lend themselves to neural networks, too. 
Nvidia discovered this around the time of what's become known as the Google Cat Project, considered a milestone in deep learning research: show a neural network thousands of pictures of cats, and it learns to recognize photos that include cats. The 2012 project was carried out on 16,000 Intel central processors inside Google's vast "farms" of computer servers, and it took a month. Google could afford to do that, but not many others could. Around the same time, forward-thinking hedge fund managers were trying to adapt deep learning to stock picking. They began using chips from Nvidia, whose graphics chips lent themselves to neural net processing better than anything made by Intel. When Nvidia found out hedge funds and others were using its chips for deep learning, it made a quick strategic move: tailoring its chips and developing software tools to support neural networks. Today, almost every major automaker is using Nvidia chips to develop driverless-car technology, and Google, Amazon and others have been adding Nvidia chips to their data centers at a furious clip. A few numbers illustrate the challenge Intel faces. Nvidia stock is up more than 165 percent over the past 52 weeks, closing at $171.96 on Tuesday; over the same period, Intel stock is up 2.2 percent, closing at $37.47. Nvidia revenue grew 38 percent to $6.9 billion in 2016, while Intel's grew 7.3 percent to $59.3 billion. Nvidia does have a much smaller revenue base, but consider profits: Nvidia's rose 171 percent to $1.6 billion, while Intel's fell 9.7 percent to $10.3 billion. Rao said deep learning "kind of fell into Nvidia's lap." He added: "To their credit, they took full advantage of it." In a statement, Nvidia affirmed its commitment to artificial intelligence. "Artificial intelligence is driving the greatest technology advances known to humankind," the company said.
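Those growth rates imply the companies' prior-year figures. As a quick arithmetic sanity check (the back-computed numbers are not figures reported in the article), one can reverse each percentage:

```python
def prior_year(current, growth_pct):
    # If a figure grew by growth_pct to reach `current`,
    # the prior-year figure was current / (1 + growth_pct/100).
    return current / (1 + growth_pct / 100)

print(round(prior_year(6.9, 38), 1))     # Nvidia revenue: ~5.0 ($B)
print(round(prior_year(59.3, 7.3), 1))   # Intel revenue: ~55.3 ($B)
print(round(prior_year(1.6, 171), 2))    # Nvidia profit: ~0.59 ($B)
print(round(prior_year(10.3, -9.7), 1))  # Intel profit: ~11.4 ($B)
```

The asymmetry is the story: Nvidia nearly tripled a profit base of roughly half a billion dollars, while Intel's far larger profit shrank.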
"From diagnosing skin cancer using a photo to making our roads safer with self-driving cars, AI will automate intelligence and spur a wave of social progress unmatched since the industrial revolution." Rao sees a way to surpass Nvidia with chips designed not for computer games, but specifically for neural networks. He'll have to integrate them into the rest of Intel's business. Artificial intelligence chips won't work on their own. For a time, they'll be tied into Intel's CPUs at cloud data centers around the world, where Intel CPUs still dominate - often in concert with Nvidia chips. Intel is on an acquisition spree. In 2016, it acquired Movidius, a Silicon Valley company that specializes in making smart vision chips for consumer devices, including drones. Earlier this year, it paid a whopping $15.3 billion for Mobileye, an Israeli maker of camera, chip and software systems for driverless cars. It's also partnered with a company that specializes in custom chips for specific applications, and bought another that makes chips whose "firmware" can be reprogrammed depending on the job at hand. It needs to pull all those pieces together. When Intel bought Nervana, it deemed the small company the "foundation" of its foray into artificial intelligence. It put Rao in charge of all of Intel's AI efforts, reporting directly to Intel Chief Executive Brian Krzanich. (Krzanich was unavailable for an interview.) But pulling off the merger will be tricky, in terms of culture and technical execution. Intel suffers a poor record with startup acquisitions, analyst Gwennap said. Intel had a wireless-device chip unit before Apple introduced the iPhone, he noted, but sold it to Marvell Technology Group in 2006. To try to correct its error, Intel bought the wireless unit of Infineon Technologies in 2010, but culture clashes and disagreements about technology knocked Intel out of the mobile game. 
Nimbler companies such as ARM Holdings (now part of SoftBank) ran away with the mobile chip business. Steve Jurvetson of DFJ Venture Capital, an early investor in Nervana, said Rao has the chops to help pull off a cultural change at Intel, if Intel will let him. "He's a polymath, and he's brilliant in his ability to integrate ideas across a lot of disciplines," Jurvetson said. "He has a warm, professorial kind of manner. He enjoys teaching others and debating the viability of their ideas. But he won't hold back if he thinks something's a bad idea." Rao said he's encountered resistance from some of Intel's old guard, but believes the company has left its Intel-always-knows-best past behind it. Buying Nervana "is not something the Intel of five years ago would have done," he said. "It's an open culture. I can say, 'Guys, you aren't getting this.'" Sometimes, he's found himself pounding on the table in meetings. But he said he's confident that Krzanich has his back. "Intel has an opportunity to match or surpass the kind of performance Nvidia is talking about," Gwennap said. "But just because you start from a blank sheet of paper doesn't mean you're going to come up with a masterpiece. Nothing is guaranteed."
Sources say Jared Kushner wasn't the only White House adviser who used a personal email account to discuss government business. The Senate Intelligence Committee said President Trump's adviser and son-in-law Jared Kushner failed to disclose that he had used a private email address to conduct official White House business. On Thursday, committee Chairman Richard Burr, R-N.C., and Vice Chairman Mark Warner, D-Va., sent a letter to Kushner and his lawyer, Abbe Lowell, saying they were "concerned" to learn about the email address from news reports instead of from Kushner himself. Kushner appeared before the committee in July in a closed session, as part of the committee's probe into Russian meddling in the 2016 presidential election and possible collusion with Trump associates. "As you are aware, this committee has previously requested that you preserve and produce certain documents related to the Russian inquiry—including, but not limited to, email communications," says the letter, first reported by CNN. The senators sent their letter just days after Lowell confirmed to news organizations that Kushner had used a private email address to conduct some official White House business. According to Lowell, the account was created during the transition and carried fewer than 100 email exchanges from January through August. The committee asked Kushner to confirm that the documents he provided included the additional personal email account referenced by Lowell. The senators also asked about any other email accounts, messaging apps or similar communications channels Kushner may have used that are "relevant to our inquiry."