
Wednesday, January 7, 2015

A Revolution from Machine Learning to Human Social Structures: Giant Cloud Robots Will Change Human Society Within 20 Years ( From machine learning to human social structures Revolution - the giant cloud robot will change human society within 20 years )

It used to be that if you wanted to get a computer to do something new, you would have to program it. Now, programming, for those of you here that haven't done it yourself, requires laying out in excruciating detail every single step that you want the computer to do in order to achieve your goal. Now, if you want to do something that you don't know how to do yourself, then this is going to be a great challenge.

So this was the challenge faced by this man, Arthur Samuel. In 1956, he wanted to get this computer to be able to beat him at checkers. How can you write a program, lay out in excruciating detail, how to be better than you at checkers? So he came up with an idea: he had the computer play against itself thousands of times and learn how to play checkers. And indeed it worked, and in fact, by 1962, this computer had beaten the Connecticut state champion.

So Arthur Samuel was the father of machine learning, and I have a great debt to him, because I am a machine learning practitioner. I was the president of Kaggle, a community of over 200,000 machine learning practitioners. Kaggle puts up competitions to try and get them to solve previously unsolved problems, and it's been successful hundreds of times. So from this vantage point, I was able to find out a lot about what machine learning could do in the past, can do today, and could do in the future. Perhaps the first big commercial success of machine learning was Google. Google showed that it is possible to find information by using a computer algorithm, and this algorithm is based on machine learning. Since that time, there have been many commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest products that you might like to buy, movies that you might like to watch. Sometimes, it's almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about who your friends might be and you have no idea how they did it, and this is because they're using the power of machine learning. These are algorithms that have learned how to do this from data rather than being programmed by hand.

This is also how IBM was successful in getting Watson to beat the two world champions at "Jeopardy," answering incredibly subtle and complex questions like this one. ["The ancient 'Lion of Nimrud' went missing from this city's national museum in 2003 (along with a lot of other stuff)"] This is also why we are now able to see the first self-driving cars. If you want to be able to tell the difference between, say, a tree and a pedestrian, well, that's pretty important. We don't know how to write those programs by hand, but with machine learning, this is now possible. And in fact, this car has driven over a million miles without any accidents on regular roads.

So we now know that computers can learn, and computers can learn to do things that we actually sometimes don't know how to do ourselves, or maybe can do them better than us. One of the most amazing examples I've seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy called Geoffrey Hinton from the University of Toronto won a competition for automatic drug discovery. Now, what was extraordinary here is not just that they beat all of the algorithms developed by Merck or the international academic community, but nobody on the team had any background in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm called deep learning. So important was this that in fact the success was covered in The New York Times in a front page article a few weeks later. This is Geoffrey Hinton here on the left-hand side. Deep learning is an algorithm inspired by how the human brain works, and as a result it's an algorithm which has no theoretical limitations on what it can do. The more data you give it and the more computation time you give it, the better it gets.

The New York Times also showed in this article another extraordinary result of deep learning which I'm going to show you now. It shows that computers can listen and understand.

(Video) Richard Rashid: Now, the last step that I want to be able to take in this process is to actually speak to you in Chinese. Now the key thing there is, we've been able to take a large amount of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text and converts it into Chinese language, and then we've taken an hour or so of my own voice and we've used that to modulate the standard text-to-speech system so that it would sound like me. Again, the result's not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There's much work to be done in this area. (In Chinese)

Jeremy Howard: Well, that was at a machine learning conference in China. It's not often, actually, at academic conferences that you do hear spontaneous applause, although of course sometimes at TEDx conferences, feel free. Everything you saw there was happening with deep learning. (Applause) Thank you. The transcription in English was deep learning. The translation to Chinese and the text in the top right, deep learning, and the construction of the voice was deep learning as well.

So deep learning is this extraordinary thing. It's a single algorithm that can seem to do almost anything, and I discovered that a year earlier, it had also learned to see. In this obscure competition from Germany called the German Traffic Sign Recognition Benchmark, deep learning had learned to recognize traffic signs like this one. Not only could it recognize the traffic signs better than any other algorithm, the leaderboard actually showed it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see better than people. Since that time, a lot has happened. In 2012, Google announced that they had a deep learning algorithm watch YouTube videos, crunching the data on 16,000 computers for a month, and the computer independently learned about concepts such as people and cats just by watching the videos. This is much like the way that humans learn. Humans don't learn by being told what they see, but by learning for themselves what these things are. Also in 2012, Geoffrey Hinton, who we saw earlier, won the very popular ImageNet competition, trying to figure out from one and a half million images what each one is a picture of. As of 2014, we're now down to a six percent error rate in image recognition. This is better than people, again.

So machines really are doing an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers. Imagine how long it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded to Baidu's deep learning system, and underneath you can see that the system has understood what that picture is and found similar images. The similar images actually have similar backgrounds, similar directions of the faces, even some with their tongue out. This is clearly not looking at the text of a web page. All I uploaded was an image. So we now have computers which really understand what they see and can therefore search databases of hundreds of millions of images in real time.

So what does it mean now that computers can see? Well, it's not just that computers can see. In fact, deep learning has done more than that. Complex, nuanced sentences like this one are now understandable with deep learning algorithms. As you can see here, this Stanford-based system showing the red dot at the top has figured out that this sentence is expressing negative sentiment. Deep learning now, in fact, is near human performance at understanding what sentences are about and what they are saying about those things. Also, deep learning has been used to read Chinese, again at about native Chinese speaker level. This algorithm was developed in Switzerland by people, none of whom speak or understand any Chinese. As I say, this deep learning system is about the best in the world for this, even compared to native human understanding.
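To make concrete what "learned from data rather than programmed" means here, below is a minimal Python sketch of sentiment learned from labeled examples. It is emphatically not the Stanford system mentioned above (that is a far more powerful recursive deep network); this is just a toy bag-of-words logistic regression on four invented sentences, but the principle is the same: nobody tells the model which words are positive, it infers that from the labels.

```python
# Toy stand-in for learned sentiment analysis: bag-of-words logistic
# regression trained on a handful of labeled sentences.
import numpy as np

train = [("a masterpiece , moving and beautiful", 1),
         ("sharp writing and a great cast", 1),
         ("dull , lifeless and far too long", 0),
         ("a mess of cliches with no heart", 0)]

vocab = sorted({w for text, _ in train for w in text.split()})

def featurize(text):
    # Bag-of-words vector: count of each vocabulary word in the sentence.
    return np.array([text.split().count(w) for w in vocab], dtype=float)

X = np.stack([featurize(t) for t, _ in train])
y = np.array([label for _, label in train], dtype=float)

w = np.zeros(len(vocab))
b = 0.0
for _ in range(500):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid probabilities
    grad = p - y                              # dLoss/dlogit for log-loss
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

test = "moving and sharp"
prob = 1.0 / (1.0 + np.exp(-(featurize(test) @ w + b)))
print(f"P(positive | {test!r}) = {prob:.2f}")
```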

This is a system that we put together at my company which shows putting all this stuff together. These are pictures which have no text attached, and as I type in sentences here, in real time it's understanding these pictures and figuring out what they're about and finding pictures that are similar to the text that I'm writing. So you can see, it's actually understanding my sentences and actually understanding these pictures. I know that you've seen something like this on Google, where you can type in things and it will show you pictures, but actually what it's doing is it's searching the webpage for the text. This is very different from actually understanding the images. This is something that computers have only been able to do in the last few months.

So we can see now that computers can not only see but they can also read, and, of course, we've shown that they can understand what they hear. Perhaps not surprising now that I'm going to tell you they can write. Here is some text that I generated using a deep learning algorithm yesterday. And here is some text that an algorithm out of Stanford generated. Each of these sentences was generated by a deep learning algorithm to describe each of those pictures. This algorithm has never before seen a man in a black shirt playing a guitar. It's seen a man before, it's seen black before, it's seen a guitar before, but it has independently generated this novel description of this picture. We're still not quite at human performance here, but we're close. In tests, humans prefer the computer-generated caption one out of four times. Now, this system is only two weeks old, so probably within the next year, the computer algorithm will be well past human performance at the rate things are going. So computers can also write.

So we put all this together and it leads to very exciting opportunities. For example, in medicine, a team in Boston announced that they had discovered dozens of new clinically relevant features of tumors which help doctors make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that, looking at tissues under magnification, they've developed a machine learning-based system which in fact is better than human pathologists at predicting survival rates for cancer sufferers. In both of these cases, not only were the predictions more accurate, but they generated new insightful science. In the radiology case, they were new clinical indicators that humans can understand. In this pathology case, the computer system actually discovered that the cells around the cancer are as important as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists had been taught for decades. In each of those two cases, they were systems developed by a combination of medical experts and machine learning experts, but as of last year, we're now beyond that too. This is an example of identifying cancerous areas of human tissue under a microscope. The system being shown here can identify those areas more accurately than, or about as accurately as, human pathologists, but was built entirely with deep learning, using no medical expertise, by people who have no background in the field. Similarly, here, this is a neuron segmentation. We can now segment neurons about as accurately as humans can, but this system was developed with deep learning by people with no previous background in medicine.

So myself, as somebody with no previous background in medicine, I seem to be entirely well qualified to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest that it ought to be possible to do very useful medicine using just these data analytic techniques. And thankfully, the feedback has been fantastic, not just from the media but from the medical community, who have been very supportive. The theory is that we can take the middle part of the medical process and turn that into data analysis as much as possible, leaving doctors to do what they're best at. I want to give you an example. It now takes us about 15 minutes to generate a new medical diagnostic test and I'll show you that in real time now, but I've compressed it down to three minutes by cutting some pieces out. Rather than showing you creating a medical diagnostic test, I'm going to show you a diagnostic test of car images, because that's something we can all understand.

So here we're starting with about 1.5 million car images, and I want to create something that can split them up by the angle of the photo being taken. So these images are entirely unlabeled, so I have to start from scratch. With our deep learning algorithm, it can automatically identify areas of structure in these images. So the nice thing is that the human and the computer can now work together. So the human, as you can see here, is telling the computer about areas of interest which it wants the computer then to try and use to improve its algorithm. Now, these deep learning systems actually are in 16,000-dimensional space, so you can see here the computer rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then point out the areas that are interesting. So here, the computer has successfully found areas, for example, angles. So as we go through this process, we're gradually telling the computer more and more about the kinds of structures we're looking for. You can imagine in a diagnostic test this would be a pathologist identifying areas of pathosis, for example, or a radiologist indicating potentially troublesome nodules. And sometimes it can be difficult for the algorithm. In this case, it got kind of confused. The fronts and the backs of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts as opposed to the backs, then telling the computer that this is a type of group that we're interested in.

So we do that for a while, we skip over a little bit, and then we train the machine learning algorithm based on these couple of hundred things, and we hope that it's gotten a lot better. You can see, it's now started to fade some of these pictures out, showing us that it already is recognizing how to understand some of these itself. We can then use this concept of similar images, and using similar images, you can now see, the computer at this point is able to entirely find just the fronts of cars. So at this point, the human can tell the computer, okay, yes, you've done a good job of that.

Sometimes, of course, even at this point it's still difficult to separate out groups. In this case, even after we let the computer try to rotate this for a while, we still find that the left-side and right-side pictures are all mixed up together. So we can again give the computer some hints, and we say, okay, try and find a projection that separates out the left sides and the right sides as much as possible using this deep learning algorithm. And giving it that hint -- ah, okay, it's been successful. It's managed to find a way of thinking about these objects that separates them out.
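The talk does not say how the computer finds that separating projection, so the following Python sketch uses Fisher's linear discriminant analysis (LDA) as a hedged stand-in: given a few human "left side" / "right side" hints, LDA picks the single direction in a high-dimensional feature space that best pulls the two hinted groups apart. The 256-dimensional synthetic features here are an illustrative substitute for real image features.

```python
# Hint-driven projection: separate two labeled groups with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
dim = 256                                   # stand-in for the 16,000-dim space
offset = rng.normal(size=dim)               # hidden direction separating groups
left = rng.normal(size=(100, dim)) - 0.5 * offset
right = rng.normal(size=(100, dim)) + 0.5 * offset
X = np.vstack([left, right])
y = np.array([0] * 100 + [1] * 100)         # the human's "left"/"right" hints

# LDA finds the 1-D projection that best separates the two hinted groups.
lda = LinearDiscriminantAnalysis(n_components=1)
z = lda.fit_transform(X, y).ravel()

print("left-side mean :", round(float(z[y == 0].mean()), 2))
print("right-side mean:", round(float(z[y == 1].mean()), 2))
```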

So you get the idea here. This is a case not where the human is being replaced by a computer, but where they're working together. What we're doing here is we're replacing something that used to take a team of five or six people about seven years and replacing it with something that takes 15 minutes for one person acting alone.

So this process takes about four or five iterations. You can see we now have 62 percent of our 1.5 million images classified correctly. And at this point, we can start to quite quickly grab whole big sections, check through them to make sure that there are no mistakes. Where there are mistakes, we can let the computer know about them. And using this kind of process for each of the different groups, we are now up to an 80 percent success rate in classifying the 1.5 million images. And at this point, it's just a case of finding the small number that aren't classified correctly, and trying to understand why. And using that approach, within 15 minutes we get to 97 percent classification rates.
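As a minimal sketch of that correct-and-retrain loop (the logistic model, the 20-dimensional synthetic features, and the batch of 50 corrections per round are all illustrative assumptions, not the system in the talk): train on the labels gathered so far, have the "human" correct the predictions the model is least sure about, retrain, and watch accuracy climb iteration by iteration. The exact numbers depend on the synthetic data, not on the 62, 80, and 97 percent figures above.

```python
# Human-in-the-loop labeling: retrain, correct least-confident items, repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))                    # stand-in "images"
true = (X @ rng.normal(size=20) > 0).astype(int)   # hidden ground truth

labeled = list(rng.choice(len(X), 50, replace=False))  # tiny starting set
for it in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], true[labeled])
    acc = clf.score(X, true)
    print(f"iteration {it}: {acc:.0%} of items classified correctly")
    # The "human" reviews the 50 items the model is least sure about
    # and supplies their correct labels.
    conf = np.abs(clf.predict_proba(X)[:, 1] - 0.5)
    labeled += list(np.argsort(conf)[:50])
```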
So this kind of technique could allow us to fix a major problem, which is that there's a lack of medical expertise in the world. The World Economic Forum says that there's between a 10x and a 20x shortage of physicians in the developing world, and it would take about 300 years to train enough people to fix that problem. So imagine if we can help enhance their efficiency using these deep learning approaches?

So I'm very excited about the opportunities. I'm also concerned about the problems. The problem here is that every area in blue on this map is somewhere where services are over 80 percent of employment. What are services? These are services. These are also the exact things that computers have just learned how to do. So 80 percent of the world's employment in the developed world is stuff that computers have just learned how to do. What does that mean? Well, it'll be fine. They'll be replaced by other jobs. For example, there will be more jobs for data scientists. Well, not really. It doesn't take data scientists very long to build these things. For example, these four algorithms were all built by the same guy. So if you think, oh, it's all happened before, we've seen the results in the past of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It's very hard for us to estimate this, because human performance grows at this gradual rate, but we now have a system, deep learning, that we know actually grows in capability exponentially. And we're here. So currently, we see the things around us and we say, "Oh, computers are still pretty dumb." Right? But in five years' time, computers will be off this chart. So we need to be starting to think about this capability right now.

We have seen this once before, of course. In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all situations, things really settled down. The Machine Learning Revolution is going to be very different from the Industrial Revolution, because the Machine Learning Revolution never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before, so your previous understanding of what's possible is different.

This is already impacting us. In the last 25 years, as capital productivity has increased, labor productivity has been flat, in fact even a little bit down.

So I want us to start having this discussion now. I know that when I often tell people about this situation, people can be quite dismissive. Well, computers can't really think, they don't emote, they don't understand poetry, we don't really understand how they work. So what? Computers right now can do the things that humans spend most of their time being paid to do, so now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality. Thank you!


Saturday, September 27, 2014

Solar Power Applied to a Drone Connectivity Project: Set to Change Communications in Developing Countries ( Solar Airplane Autopilot Will Change the Global Wireless Communication Network - Specially in developing country )

Facebook's connectivity drone is "bigger than big": the size of a Boeing 747, with solar power for long-duration flight


Yael Maguire, engineering director of the Facebook Connectivity Lab, revealed at the 2014 Social Good Summit hosted by Mashable that the drone the lab plans to build will not be the small aircraft of conventional imagination but a large one roughly the size of a commercial 747, relying on solar power and able to stay aloft for months or years at a time.

The Connectivity Lab is the drone laboratory Facebook unveiled this March, bringing together aviation, jet, and communications experts to build drones that can deliver internet service, in support of the Internet.org global connectivity initiative. At the time, Facebook also announced the acquisition of Ascenta, a British team specializing in solar-powered drones, to accelerate the project.

Speaking with Mashable CEO Pete Cashmore, Maguire said that for these drones to stay in the sky for months or years, they must fly at 60,000 to 90,000 feet, higher than ordinary aircraft go. Beyond the altitude requirement, the drone must be solar powered and about the size of a commercial 747, but much lighter: one prototype in the lab is roughly as long as six or seven Priuses yet weighs only as much as four Prius tires.

The Boeing 747 carries passengers or cargo; the most common model, the 747-400 airliner, can seat 416 to 660 passengers depending on cabin configuration.

Facebook's drone connectivity project targets developing countries and has already listed 21 priority deployment countries in Latin America, Africa, and Asia; the aircraft and their solar panels must be designed around the sunlight conditions in those regions.

Although many technical and regulatory issues remain, Facebook plans to send at least one drone up for testing next year.

Facebook drones the size of jumbo jets to soar 17 miles up

Facebook will create thousands of drones the size of jumbo jets which will fly 17 miles above the Earth to provide wireless internet access to the four billion people currently unable to get online.

The social network announced in March that it was in negotiations to buy drone maker Titan Aerospace, which was subsequently snapped up by Google. Now it seems that the company is developing its own drones instead.

Today, only 2.7 billion people – just over one-third of the world's population – have access to the internet, according to Facebook. The social networking company is one of the main backers of the internet.org project which aims to connect the large parts of the world which remain offline.

Initially it was thought that Facebook would create around 11,000 smaller drones with the help of Titan Aerospace. But a senior engineer has now revealed that the company’s plan B is far more ambitious even than that.
"We're going to have to push the edge of solar technology, battery technology, composite technology," said Yael Maguire, the leader of Facebook's new Connectivity Lab, during a panel session at the Social Good Summit in New York this week. "There are a whole bunch of challenges."

To fly for months and years at a time the drones will need to rise above the weather, flying at between 60,000 and 90,000 feet – around 17 miles above the ground.

Flying this high will solve problems associated with weather, but could throw up new legislative ones. Above 60,000 feet there are essentially no regulations on aircraft – commercial airlines routinely fly at around half of that altitude. Rules regarding satellites will “play a very useful role”, said Maguire, but the company will also have to “help pave new ground”.
Regulations regarding human operators will also need to be adjusted if the company’s plans are to be a success. Currently one person must be in control of an aircraft at all times, but Facebook hopes to change legislation so that one person can control ten or even a hundred partially-automated aircraft.

"We can't have one person per plane if we want to figure out how to connect the world,” said Maguire.

The aircraft will be “roughly the size of a commercial aircraft, like a 747” said Maguire, but they will be far, far lighter. One prototype currently being worked on is about the length of seven cars, but weighs the same as just four car tyres.

The planes will be tested at some point next year, somewhere in the US, and the company hopes to have them working and in operation over developing countries within three to five years. It has already chosen 21 locations around the world where it would like to deploy them, in Latin America, Asia and Africa, and is looking for charities to run the equipment once it is manufactured.
Google is also working on similar technology to Facebook, having bought drone manufacturer Titan Aerospace earlier this year. The company creates solar-powered drones which can fly for several years at a time.

A Google spokesperson said at the time of the takeover: "It’s still early days, but atmospheric satellites could help bring internet access to millions of people, and help solve other problems, including disaster relief and environmental damage like deforestation."
The search giant also launched Project Loon in 2013 which is investigating the use of high-altitude weather balloons which can transmit internet signals to the ground for the same purpose.


Solar Impulse 2: Solar-Powered Plane to Fly 25 Days Continuously Around Globe

An enormous solar powered plane wider than a Boeing 747 jumbo jet will become the first of its kind to circumnavigate the globe without fuel when it takes off from Abu Dhabi next year.

The Swiss-made Solar Impulse 2 has 17,000 solar panels spread across a wingspan of 72m - four metres wider than a 747, which has a wingspan of 68m.

Pilots Bertrand Piccard and Andre Borschberg, who are also co-founders of firm Solar Impulse, will take it in turns to fly the plane, which would become the first aircraft to fly day and night without fuel or emissions.

The flight, which will depart from Abu Dhabi in March 2015 and has attracted the support of Richard Branson, aims to complete 25 flying-days around the globe before finally descending and touching down four months and 35,000km later in Abu Dhabi in July.

The single-seater airplane will fly over the Arabian Sea, India, Myanmar, China, the Pacific Ocean, the United States, the Atlantic Ocean and southern Europe or North Africa, it was announced on Thursday in New York.

Flights over the vast oceans of the Pacific and Atlantic will last five to six days, made possible by Solar Impulse 2's use of solar power.

Piccard and Borschberg will have access to six oxygen bottles, a parachute, a life raft and food and water rations.

Thursday, July 10, 2014

Google Ignites the Smart Home, from Nest to Dropcam ( Google strategy is to push IOT from Nest and Dropcam )

Nest acquires Dropcam for US$555 million

Nest has announced that it will acquire Dropcam, an IP-camera company, for US$555 million. The acquisition was made in Nest's name and announced on Nest's official blog, so the deal was presumably Nest's own decision as an independent subsidiary. Nest says that after the deal closes, Dropcam will be folded into Nest, and data from Dropcam will be handled under Nest's own privacy policy: without customers' approval, personal data will not be shared with anyone; Nest notes explicitly that this includes even Google.

Nest says it spent a long time surveying camera companies and technologies from around the world and concluded without reservation that Dropcam's products, services, and customer experience are the best in the world, which is why it decided to buy the company. As for how Dropcam's assets will be used after the deal closes, Nest did not say explicitly, only that Dropcam products are now one step closer to working with Nest products; in other words, the two companies' product features may be integrated. Dropcam users can keep using their Dropcam accounts and can still buy Dropcam products from its online store and other retail outlets.

Google's Nest becomes a new platform, recruiting developers to build the smart home


Google subsidiary Nest has announced a developer program aimed at making Nest a platform that connects more home devices and applications.

Nest co-founder Matt Rogers says Nest has long been building smart products that interact with people and the home to keep it safe and comfortable, and the "Works with Nest" developer program will let developers around the world help create an even smarter home. Through the program, Nest wants software and hardware makers alike to connect to data from the Nest thermostat and the Nest Protect smoke detector, enabling many kinds of applications. He stresses that Nest is not just building a digital panel for remotely switching home devices on and off; it aims to securely connect everyday things, including lights, appliances, fitness bands, and even cars.

Partners already signed up include Mercedes-Benz, fitness-band maker Jawbone, appliance maker Whirlpool, light-bulb maker LIFX, the automation service IFTTT (If This Then That), and Logitech. All of them have already launched applications that integrate with Nest.

For example, a user can set up an IFTTT rule such as "if Nest Protect detects smoke, text my neighbor."
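Stripped down, such a rule is just "poll a sensor, send a message when a threshold trips." The Python sketch below shows that shape; the status URL, SMS gateway, and phone number are hypothetical placeholders, not the real Nest or IFTTT APIs.

```python
# IFTTT-style rule sketch: "IF smoke detected THEN text the neighbor".
import time

import requests  # third-party HTTP library: pip install requests

SMOKE_STATUS_URL = "https://home.example.com/nest-protect/status"  # hypothetical
SMS_GATEWAY_URL = "https://sms.example.com/send"                   # hypothetical

def check_and_alert():
    # Poll the (hypothetical) smoke-status endpoint...
    status = requests.get(SMOKE_STATUS_URL, timeout=10).json()
    # ...and fire one SMS through the (hypothetical) gateway if it trips.
    if status.get("smoke_detected"):
        requests.post(SMS_GATEWAY_URL, timeout=10, data={
            "to": "+1-555-0100",  # neighbor's number (placeholder)
            "body": "Smoke alarm triggered at my house, please check!",
        })

while True:
    check_and_alert()
    time.sleep(60)  # poll once a minute
```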

LIFX bulbs can flash red when Nest Protect detects smoke or carbon monoxide, and when the Nest thermostat switches to "Away" mode, LIFX can turn the lights on and off at random to make the house look occupied.

Logitech's Ultimate universal remote, which already turns on TVs and lights, can now control the Nest thermostat as well.

With Mercedes-Benz cars integrated with the Nest thermostat, owners can check and adjust their home's temperature while driving.

Through Whirlpool's link with Nest, when Nest detects that no one is home it can start the washer and dryer running back to back.

More third-party integrations are due this fall, including garage-door-opener maker Chamberlain and Nest's parent, Google. Using Google's voice recognition, users will be able to control Nest with commands like "OK Google, set the temperature to 25 degrees." Users will also be able to interact with Nest through Google Now.

Nest says more than 5,000 developers are interested in integrating with it, so alongside the developer-recruitment program Nest has announced the Thoughtful Things Fund, set up jointly with venture firm Kleiner Perkins Caufield & Byers and Google Ventures, to provide development funding for startups.

Nest recently announced its US$555 million acquisition of network-camera maker Dropcam. The Nest and Dropcam acquisitions will let Google further accelerate its smart-home plans and pull ahead of rivals such as Apple. Nest will explain the plan at the Google I/O developer conference.

Aros air conditioner by Quirky and GE is a sleek way to keep your cool this summer

Smart air conditioners are arriving, too.

The much buzzed-about air conditioning unit from Quirky – which was invented by a former U.S. Department of Energy flack – is one cool window unit.

Quirky has partnered with Uber for to-your-doorstep delivery service in New York City, a downright luxury when you think of the injustice of slogging a window unit from Home Depot on a day when there’s 99% humidity. And they’ve embraced the kitsch, with Quirky employees donning Hawaiian-print shirts and bussing around in a converted 1970s ice cream truck.

And then, there’s the Aros unit itself.

Co-designed by Garthen Leslie in a multimillion-dollar partnership with GE, this is a sleek, sexy unit for the smartphone generation. Rather than the ugly horizontal grille that is prone to collect the various dust mites lurking around the city, Aros' front is a pleasing array of dots, with the cold air blowing upwards instead of outwards.

Trend Analysis

  • The point of the smart home is not just IoT and cloud connectivity; it needs convenience, technical breakthroughs, and very high security, for example cutting energy use by 30% or automatically advising you on your diet. Otherwise, the IoT smart home will struggle to reach mass adoption.

Wednesday, July 9, 2014

Is Demand for Smartphone Accessories Lower than Cloud Demand? ( Smart Phone Accessory market growth rate is much less than cloud service )

Dropbox Is Now The Data Fabric Tying Together Devices For 100M Registered Users Who Save 1B Files A Day

“At this scale, when you help people save 10 minutes or an hour, you’re saving lifetimes of pain…And we’re just getting started.” That’s what CEO Drew Houston thinks about his company hitting 100 million registered users and 1 billion files saved a day. Here Houston tells me how he feels about being entrusted with so many memories, and how Dropbox will be the data layer connecting the future where every device is smart.

Drew’s an inspiring person to interview, so much so that I wrote a poem about Dropbox. But here are the CEO’s thoughts.

“100 million registered users is a symbol, putting us in a new category with an elite handful of companies that have ever reached that audience” says Houston. That’s up from 50 million users and 500 million files saved every 48 hours as of May 2012, and 25 million users and 200 million files saved per day in April 2011. Dropbox is now on 250 million devices in over 200 countries, and is served in eight languages, including two new ones starting today: Italian and Castilian Spanish.

“It puts a quantitative spin on that feeling that we’re solving really important problems for a big chunk of the world,” not just Silicon Valley, Houston tells me. “Our users are trapeze artists, high school football coaches — I got cornered by a couple of theoretical physicists who said Dropbox lets them collaborate across the world and share their experiments’ results. They were raving about how it’s driving their research.”

Drew says Dropbox is fulfilling the promise of the cloud. For users, he believes “Dropbox is the first day of the rest of their life. I can take my laptop, throw it in the water, go to the Apple store, and start over like nothing happened.”

It really hit home for Drew when he heard the story of a panicked father who’d recorded the first years of his child’s life on his phone. Then one day he was pulling clothes out of the washing machine, heard a clanking sound, and saw his memories of his daughter dripping out of the phone. Then he remembered he’d turned on Dropbox Camera Upload, and all those moments were safe and sound.

Dropbox’s scale and that mission, to preserve our digital histories, are attracting great employees. Drew tells me that this year they “started at 90 people and just crossed 250. It’s the biggest growth year for us by a long shot. Now for any job opening, we can go after the top five people in the world. We want a designer? Ok, who built the iPhone?”

For example, Drew cites Aditya Agarwal, who used to be a director at Facebook but became Dropbox’s VP of engineering when it acquired his startup Cove. “Here’s someone who’s changed the world. He built news feed and [Facebook] search. He was employee No. 9. And he said ‘Dropbox is the only place I could have an impact like that again.’” The firepower to pull in employees like Agarwal is just one benefit of the $257 million in funding Dropbox has raised.

Dropbox has big ambitions, so it’s going to need the talent. While it might seem simple enough stitching together data from your laptop, phone, and tablet, as we enter the age of the “Internet of things,” it’s going to get a lot more complicated. Drew says Dropbox has a chance “to make your phone smarter, your TV smarter, your car smarter. In that sense we can be the fabric that ties everything together.”

The company’s independence might give it the best shot at becoming the data layer the way Facebook became the social layer. Apple, Google, and Microsoft all have their own cloud storage systems, but they don’t necessarily cooperate with each other’s devices. Drew declares: “Not one company is going to make everything. [With Dropbox] you don’t have to worry about what logo is on the back of your photo or computer.”

…Or your refrigerator, thermostat, or sound system. Becoming useful to people who don’t have all those fancy smart devices, just the basic ones like in emerging markets, is a big focus for the company. “There’s 2 billion connected internet users now and that’s going to go to five in the next few years. Anyone with a computer or a phone needs something like Dropbox.”

But there’s a lot of work to do to get to that point of being the omnipresent data layer that lets our personal memories and professional materials criss-cross between devices. “At our age, Apple hadn’t built the Macintosh, Microsoft hadn’t built Windows. We are now playing in another league and really have an opportunity to do amazing things at scale. This is the first stop on a new road to a billion.”

Power-bank demand turns cautious, adding uncertainty for chip makers


Uncertainty is mounting in the power-bank market and demand has turned wait-and-see, adding short-term uncertainty for chip makers. With smartphones, tablets, and other mobile devices still growing strongly, and with devices moving to multi-core processors and larger screens, power consumption has risen markedly, fueling equally strong growth in power-bank demand in recent years. Taiwanese microcontroller (MCU) makers Sonix (松翰) and Holtek (盛群) have made good inroads into the power-bank market; Sonix shipped more than 100 million power-bank chip sets last year, making it Taiwan's leading power-bank chip supplier.

Holtek shipped 32.83 million power-bank chip sets last year, up nearly twofold from the year before. The market had expected Holtek's power-bank shipments to top 40 million sets this year, another 10% to 20% increase over last year.

But with uncertainty mounting and demand turning wait-and-see, the power-bank market is clouding the near-term results of chip makers such as Sonix and Holtek. Sonix notes that Apple's iPhone 6, due in the second half of the year, is drawing intense attention: not only Apple fans but also power-bank makers are watching the specifications closely, so demand has recently turned cautious.

Hit by a clear drop in power-bank chip shipments, Sonix's revenue slipped month by month in May and June, a sharp contrast to the month-over-month gains of last year's second quarter; June consolidated revenue was NT$325 million, down 5.54% from May and a three-month low. Holtek points out that Xiaomi has launched ultra-low-priced power banks, with its high-capacity 10,400mAh model selling for just NT$345 and its 5,200mAh model for as little as NT$255, far below the market rate of NT$800 to NT$1,000, hitting the power-bank market hard.

Holtek says that with Xiaomi undercutting the market, some power-bank makers are unwilling to follow it into a price war, and customer demand has indeed turned wait-and-see. Beyond the cooling power-bank market, China's continued property-tightening measures have slowed new-home sales, and the home-appliance market has been dragged down with them.

Holtek, which focuses on China's home-appliance market, is already finding demand less upbeat than previously expected, but it plans to offset the cooling of that market by pushing new projects.

Sonix, for its part, is bullish on IP cameras and wireless audio/video chips. As networks and cloud infrastructure mature, network-equipment vendors and carriers alike are rolling out smart-home services, driving a rapid rise in demand for wireless A/V chips. Sonix's wireless A/V chip revenue for the first five months grew 40% year over year, making it the company's best-performing product line this year and the main force keeping overall revenue growing; first-half consolidated revenue was NT$1.811 billion, up 1.93% from a year earlier.

Cloud services cut prices as competition heats up

The cloud price war is on. The New York Times reports that Google's cloud business is coming on strong, wielding price cuts of up to 85% and software-update services to challenge Amazon.com's dominance. Google is cutting its cloud prices by 30% to 85%: cloud storage falls to US$0.026 per GB, about 68% below the previous price; Compute Engine gets 32% cheaper regardless of region, size, or class; and BigQuery data-analysis prices drop about 85%. Google hopes eventually to turn its cloud business into a product that integrates applications and data rather than a set of scattered features.

From a strategic standpoint, Google's deep cloud price cuts are a smart move. Amazon pioneered the cloud-computing business, renting out computing power and storage to other companies starting in 2006. Well-known brands such as Netflix and Shell run their businesses on Amazon's platform, but at a steep price; many smaller online businesses could not swallow the cost and had to move to other cloud systems.

Besides competing with Amazon, Microsoft, and Rackspace Hosting Inc., Google also has to fight new providers for customers. Research firm Gartner estimates the cloud-services market reached US$131 billion last year.

Urs Holzle, Google's senior vice president of technical infrastructure, said on March 25 that the cost of cloud servers and storage has fallen sharply while cloud-computing prices have not kept falling with it, and that such a large gap should not exist. Google Compute Engine will be cut by about 32%, and the App Engine application-development platform by about 30%.

Google also announced on March 13 that Google Drive fees are being slashed: the monthly price of 100 GB of storage drops 40%, from US$4.99 to US$2.99, while 1 TB falls 80%, from US$49.99 to US$9.99; 10 TB costs US$99.99 a month, and users can pay more for even larger plans.
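Those percentages are easy to verify from the listed prices, with the discount computed as (old - new) / old; a quick Python check:

```python
# Verify the quoted Google Drive price cuts from the old and new monthly fees.
def discount(old, new):
    return (old - new) / old

print(f"100 GB: {discount(4.99, 2.99):.0%}")   # ~40%
print(f"1 TB:   {discount(49.99, 9.99):.0%}")  # ~80%
```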

Analysis