
Wednesday, January 7, 2015

From Machine Learning to a Revolution in Human Social Structures: Giant Cloud Robots Will Change Human Society Within 20 Years

It used to be that if you wanted to get a computer to do something new, you would have to program it. Now, programming, for those of you here that haven't done it yourself, requires laying out in excruciating detail every single step that you want the computer to do in order to achieve your goal. Now, if you want to do something that you don't know how to do yourself, then this is going to be a great challenge.

So this was the challenge faced by this man, Arthur Samuel. In 1956, he wanted to get this computer to be able to beat him at checkers. How can you write a program, lay out in excruciating detail, how to be better than you at checkers? So he came up with an idea: he had the computer play against itself thousands of times and learn how to play checkers. And indeed it worked, and in fact, by 1962, this computer had beaten the Connecticut state champion.
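Samuel's idea, learning a position-value function from self-play rather than from hand-coded rules, can be sketched in a few lines. The toy below is a hedged illustration, not his checkers program: a trivial counting game in which positions are scored by how often self-play wins from them; every name and number in it is made up for the example.

```python
import random

TARGET = 10     # toy game: players alternate adding 1 or 2; whoever reaches 10 wins
values = {}     # estimated win probability for the player about to move at each total

def value(total):
    return values.get(total, 0.5)   # unseen positions start as a coin flip

def play_one_game(epsilon=0.1, lr=0.05):
    """Self-play one game, then nudge each visited position toward the outcome."""
    total, history, mover = 0, [], 0
    while total < TARGET:
        moves = [m for m in (1, 2) if total + m <= TARGET]
        if random.random() < epsilon:
            move = random.choice(moves)                      # explore
        else:                                                # exploit: leave the
            move = min(moves, key=lambda m: value(total + m))  # opponent the worst spot
        history.append((total, mover))
        total += move
        mover = 1 - mover
    winner = 1 - mover   # the player who just moved reached TARGET and wins
    for pos, who in history:
        outcome = 1.0 if who == winner else 0.0
        values[pos] = value(pos) + lr * (outcome - value(pos))

for _ in range(20000):
    play_one_game()

# Positions 8 and 9 (one move from winning) should now be valued near 1.0.
print({p: round(v, 2) for p, v in sorted(values.items())})
```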

So Arthur Samuel was the father of machine learning, and I have a great debt to him, because I am a machine learning practitioner. I was the president of Kaggle, a community of over 200,000 machine learning practitioners. Kaggle puts up competitions to try and get them to solve previously unsolved problems, and it's been successful hundreds of times. So from this vantage point, I was able to find out a lot about what machine learning could do in the past, can do today, and could do in the future. Perhaps the first big commercial success of machine learning was Google. Google showed that it is possible to find information by using a computer algorithm, and this algorithm is based on machine learning. Since that time, there have been many commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest products that you might like to buy, movies that you might like to watch. Sometimes, it's almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about who your friends might be and you have no idea how they did it, and this is because they're using the power of machine learning. These are algorithms that have learned how to do this from data rather than being programmed by hand.

This is also how IBM was successful in getting Watson to beat the two world champions at "Jeopardy," answering incredibly subtle and complex questions like this one. ["The ancient 'Lion of Nimrud' went missing from this city's national museum in 2003 (along with a lot of other stuff)"] This is also why we are now able to see the first self-driving cars. If you want to be able to tell the difference between, say, a tree and a pedestrian, well, that's pretty important. We don't know how to write those programs by hand, but with machine learning, this is now possible. And in fact, this car has driven over a million miles without any accidents on regular roads.

So we now know that computers can learn, and computers can learn to do things that we actually sometimes don't know how to do ourselves, or maybe can do them better than us. One of the most amazing examples I've seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy called Geoffrey Hinton from the University of Toronto won a competition for automatic drug discovery. Now, what was extraordinary here is not just that they beat all of the algorithms developed by Merck or the international academic community, but nobody on the team had any background in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm called deep learning. So important was this that in fact the success was covered in The New York Times in a front page article a few weeks later. This is Geoffrey Hinton here on the left-hand side. Deep learning is an algorithm inspired by how the human brain works, and as a result it's an algorithm which has no theoretical limitations on what it can do. The more data you give it and the more computation time you give it, the better it gets.

The New York Times also showed in this article another extraordinary result of deep learning which I'm going to show you now. It shows that computers can listen and understand.

(Video) Richard Rashid: Now, the last step that I want to be able to take in this process is to actually speak to you in Chinese. Now the key thing there is, we've been able to take a large amount of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text and converts it into Chinese language, and then we've taken an hour or so of my own voice and we've used that to modulate the standard text-to-speech system so that it would sound like me. Again, the result's not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There's much work to be done in this area. (In Chinese)

Jeremy Howard: Well, that was at a machine learning conference in China. It's not often, actually, at academic conferences that you do hear spontaneous applause, although of course sometimes at TEDx conferences, feel free. Everything you saw there was happening with deep learning. (Applause) Thank you. The transcription in English was deep learning. The translation to Chinese and the text in the top right, deep learning, and the construction of the voice was deep learning as well.

So deep learning is this extraordinary thing. It's a single algorithm that can seem to do almost anything, and I discovered that a year earlier, it had also learned to see. In this obscure competition from Germany called the German Traffic Sign Recognition Benchmark, deep learning had learned to recognize traffic signs like this one. Not only could it recognize the traffic signs better than any other algorithm, the leaderboard actually showed it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see better than people. Since that time, a lot has happened. In 2012, Google announced that they had a deep learning algorithm watch YouTube videos and crunched the data on 16,000 computers for a month, and the computer independently learned about concepts such as people and cats just by watching the videos. This is much like the way that humans learn. Humans don't learn by being told what they see, but by learning for themselves what these things are. Also in 2012, Geoffrey Hinton, who we saw earlier, won the very popular ImageNet competition, looking to try to figure out from one and a half million images what they're pictures of. As of 2014, we're now down to a six percent error rate in image recognition. This is better than people, again.
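For readers curious what "learning to recognize traffic signs" looks like in practice, here is a minimal sketch of a small convolutional classifier, assuming PyTorch. GTSRB's 43 sign classes are real; the architecture and the random stand-in batch are illustrative choices, not the 2011 winning network.

```python
import torch
import torch.nn as nn

# A small convolutional classifier for 32x32 RGB traffic-sign crops.
# GTSRB has 43 sign classes; everything else here is an illustrative choice.
class SignNet(nn.Module):
    def __init__(self, n_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SignNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for real data.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 43, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"toy batch loss: {loss.item():.3f}")
```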

So machines really are doing an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers. Imagine how long it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded to Baidu's deep learning system, and underneath you can see that the system has understood what that picture is and found similar images. The similar images actually have similar backgrounds, similar directions of the faces, even some with their tongue out. This is clearly not looking at the text of a web page. All I uploaded was an image. So we now have computers which really understand what they see and can therefore search databases of hundreds of millions of images in real time.
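"Find similar images" in a system like this reduces to nearest-neighbour search over feature vectors produced by a deep network. A minimal numpy sketch under that assumption, with random vectors standing in for real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for what a deep network would output: one 512-d vector per image.
database = rng.normal(size=(10_000, 512)).astype(np.float32)
database /= np.linalg.norm(database, axis=1, keepdims=True)   # unit length, once

def most_similar(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k database images closest to `query` in cosine similarity."""
    q = query / np.linalg.norm(query)
    scores = database @ q                # cosine similarity via a single dot product
    return np.argsort(scores)[::-1][:k]

query_vec = rng.normal(size=512).astype(np.float32)
print(most_similar(query_vec))           # top-5 most similar images
```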

So what does it mean now that computers can see? Well, it's not just that computers can see. In fact, deep learning has done more than that. Complex, nuanced sentences like this one are now understandable with deep learning algorithms. As you can see here, this Stanford-based system showing the red dot at the top has figured out that this sentence is expressing negative sentiment. Deep learning now is in fact near human performance at understanding what sentences are about and what they are saying about those things. Also, deep learning has been used to read Chinese, again at about native Chinese speaker level. This algorithm was developed in Switzerland by people, none of whom speak or understand any Chinese. As I say, using deep learning is about the best system in the world for this, even compared to native human understanding.
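The Stanford system mentioned here is a recursive neural network trained over parse trees; as a far simpler stand-in that shows the shape of the task, here is a classical bag-of-words sentiment classifier, assuming scikit-learn and a toy four-sentence corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy corpus; a real system would train on thousands of labelled reviews.
sentences = [
    "a stirring, funny and finally transporting film",
    "the movie was a complete waste of time",
    "an astonishing, beautifully crafted story",
    "dull, clumsy and utterly forgettable",
]
labels = [1, 0, 1, 0]   # 1 = positive sentiment, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)

print(model.predict(["a clumsy waste of a beautifully crafted premise"]))
```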

This is a system that we put together at my company which shows putting all this stuff together. These are pictures which have no text attached, and as I'm typing in here sentences, in real time it's understanding these pictures and figuring out what they're about and finding pictures that are similar to the text that I'm writing. So you can see, it's actually understanding my sentences and actually understanding these pictures. I know that you've seen something like this on Google, where you can type in things and it will show you pictures, but actually what it's doing is it's searching the webpage for the text. This is very different from actually understanding the images. This is something that computers have only been able to do for the first time in the last few months.

So we can see now that computers can not only see but they can also read, and, of course, we've shown that they can understand what they hear. Perhaps not surprising now that I'm going to tell you they can write. Here is some text that I generated using a deep learning algorithm yesterday. And here is some text that an algorithm out of Stanford generated. Each of these sentences was generated by a deep learning algorithm to describe each of those pictures. This algorithm has never before seen a man in a black shirt playing a guitar. It's seen a man before, it's seen black before, it's seen a guitar before, but it has independently generated this novel description of this picture. We're still not quite at human performance here, but we're close. In tests, humans prefer the computer-generated caption one out of four times. Now this system is only two weeks old, so probably within the next year, the computer algorithm will be well past human performance at the rate things are going. So computers can also write.

So we put all this together and it leads to very exciting opportunities. For example, in medicine, a team in Boston announced that they had discovered dozens of new clinically relevant features of tumors which help doctors make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that, looking at tissues under magnification, they've developed a machine learning-based system which in fact is better than human pathologists at predicting survival rates for cancer sufferers. In both of these cases, not only were the predictions more accurate, but they generated new insightful science. In the radiology case, they were new clinical indicators that humans can understand. In this pathology case, the computer system actually discovered that the cells around the cancer are as important as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists had been taught for decades. In each of those two cases, they were systems developed by a combination of medical experts and machine learning experts, but as of last year, we're now beyond that too. This is an example of identifying cancerous areas of human tissue under a microscope. The system being shown here can identify those areas more accurately, or about as accurately, as human pathologists, but was built entirely with deep learning using no medical expertise by people who have no background in the field. Similarly, here, this neuron segmentation. We can now segment neurons about as accurately as humans can, but this system was developed with deep learning using people with no previous background in medicine.

So myself, as somebody with no previous background in medicine, I seem to be entirely well qualified to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest that it ought to be possible to do very useful medicine using just these data analytic techniques. And thankfully, the feedback has been fantastic, not just from the media but from the medical community, who have been very supportive. The theory is that we can take the middle part of the medical process and turn that into data analysis as much as possible, leaving doctors to do what they're best at. I want to give you an example. It now takes us about 15 minutes to generate a new medical diagnostic test and I'll show you that in real time now, but I've compressed it down to three minutes by cutting some pieces out. Rather than showing you creating a medical diagnostic test, I'm going to show you a diagnostic test of car images, because that's something we can all understand.

So here we're starting with about 1.5 million car images, and I want to create something that can split them into the angle of the photo that's being taken. So these images are entirely unlabeled, so I have to start from scratch. With our deep learning algorithm, it can automatically identify areas of structure in these images. So the nice thing is that the human and the computer can now work together. So the human, as you can see here, is telling the computer about areas of interest which it wants the computer then to try and use to improve its algorithm. Now, these deep learning systems actually are in 16,000-dimensional space, so you can see here the computer rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then point out the areas that are interesting. So here, the computer has successfully found areas, for example, angles. So as we go through this process, we're gradually telling the computer more and more about the kinds of structures we're looking for. You can imagine in a diagnostic test this would be a pathologist identifying areas of pathosis, for example, or a radiologist indicating potentially troublesome nodules. And sometimes it can be difficult for the algorithm. In this case, it got kind of confused. The fronts and the backs of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts as opposed to the backs, then telling the computer that this is a type of group that we're interested in.

So we do that for a while, we skip over a little bit, and then we train the machine learning algorithm based on these couple of hundred things, and we hope that it's gotten a lot better. You can see, it's now started to fade some of these pictures out, showing us that it already is recognizing how to understand some of these itself. We can then use this concept of similar images, and using similar images, you can now see, the computer at this point is able to entirely find just the fronts of cars. So at this point, the human can tell the computer, okay, yes, you've done a good job of that.

Sometimes, of course, even at this point it's still difficult to separate out groups. In this case, even after we let the computer try to rotate this for a while, we still find that the left-side and right-side pictures are all mixed up together. So we can again give the computer some hints, and we say, okay, try and find a projection that separates out the left sides and the right sides as much as possible using this deep learning algorithm. And giving it that hint -- ah, okay, it's been successful. It's managed to find a way of thinking about these objects that separates these out.
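The "hint" being described, finding a projection that best separates two labelled groups, is classically what linear discriminant analysis computes (the system in the talk presumably learns its own, nonlinear version via deep learning). A sketch assuming scikit-learn, with random clusters standing in for left-side and right-side car features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Stand-ins for high-dimensional image features: two overlapping clusters
# playing the role of "left side" and "right side" car photos.
left = rng.normal(loc=0.0, size=(200, 50))
right = rng.normal(loc=0.7, size=(200, 50))
X = np.vstack([left, right])
y = np.array([0] * 200 + [1] * 200)          # the human's hint: a few labels

# Find the 1-D projection that best separates the two labelled groups.
lda = LinearDiscriminantAnalysis(n_components=1)
projected = lda.fit_transform(X, y)

print("mean projected position, left :", projected[y == 0].mean().round(2))
print("mean projected position, right:", projected[y == 1].mean().round(2))
```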

So you get the idea here. This is a case not where the human is being replaced by a computer, but where they're working together. What we're doing here is we're replacing something that used to take a team of five or six people about seven years and replacing it with something that takes 15 minutes for one person acting alone.

So this process takes about four or five iterations. You can see we now have 62 percent of our 1.5 million images classified correctly. And at this point, we can start to quite quickly grab whole big sections, check through them to make sure that there's no mistakes. Where there are mistakes, we can let the computer know about them. And using this kind of process for each of the different groups, we are now up to an 80 percent success rate in classifying the 1.5 million images. And at this point, it's just a case of finding the small number that aren't classified correctly, and trying to understand why. And using that approach, within 15 minutes we get to 97 percent classification rates.
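This whole workflow, label a little, train, let the model handle what it is sure of, and route the uncertain cases back to the human, is an active-learning loop. A minimal sketch assuming scikit-learn, where synthetic data and an oracle array stand in for the 1.5 million images and the human:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the unlabeled image features (the talk's 1.5M images).
X, hidden_truth = make_classification(n_samples=5000, n_features=30, random_state=0)

y_known = {i: hidden_truth[i] for i in range(20)}   # a handful of initial human labels

model = LogisticRegression(max_iter=1000)
for round_no in range(5):                 # "about four or five iterations"
    idx = list(y_known)
    model.fit(X[idx], [y_known[i] for i in idx])

    # Route the examples the model is least sure about back to the "human".
    probs = model.predict_proba(X)[:, 1]
    uncertainty = np.abs(probs - 0.5)     # 0 = totally unsure, 0.5 = confident
    queries = [i for i in np.argsort(uncertainty) if i not in y_known][:50]
    for i in queries:
        y_known[i] = hidden_truth[i]      # the oracle stands in for the human answer

    acc = (model.predict(X) == hidden_truth).mean()
    print(f"round {round_no}: {len(y_known)} labels, accuracy {acc:.1%}")
```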
So this kind of technique could allow us to fix a major problem, which is that there's a lack of medical expertise in the world. The World Economic Forum says that there's between a 10x and a 20x shortage of physicians in the developing world, and it would take about 300 years to train enough people to fix that problem. So imagine if we can help enhance their efficiency using these deep learning approaches?

So I'm very excited about the opportunities. I'm also concerned about the problems. The problem here is that every area in blue on this map is somewhere where services are over 80 percent of employment. What are services? These are services. These are also the exact things that computers have just learned how to do. So 80 percent of employment in the developed world is stuff that computers have just learned how to do. What does that mean? Well, it'll be fine. They'll be replaced by other jobs. For example, there will be more jobs for data scientists. Well, not really. It doesn't take data scientists very long to build these things. For example, these four algorithms were all built by the same guy. So if you think, oh, it's all happened before, we've seen the results in the past of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It's very hard for us to estimate this, because human performance grows at this gradual rate, but we now have a system, deep learning, that we know actually grows in capability exponentially. And we're here. So currently, we see the things around us and we say, "Oh, computers are still pretty dumb." Right? But in five years' time, computers will be off this chart. So we need to be starting to think about this capability right now.

We have seen this once before, of course. In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all the situations, things really settled down. The Machine Learning Revolution is going to be very different from the Industrial Revolution, because the Machine Learning Revolution never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual activities, so this is going to be a kind of change that the world has actually never experienced before, so your previous understanding of what's possible is different.

This is already impacting us. In the last 25 years, as capital productivity has increased, labor productivity has been flat, in fact even a little bit down.

So I want us to start having this discussion now. I know that when I often tell people about this situation, people can be quite dismissive. Well, computers can't really think, they don't emote, they don't understand poetry, we don't really understand how they work. So what? Computers right now can do the things that humans spend most of their time being paid to do, so now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality. Thank you!


Monday, May 12, 2014

Lessons from Jinha Lee's and Jeff Han's Human-Interface Innovations: Tablets and Wearables Still Need More Interface Innovation

Jinha Lee: Reach into the computer and grab a pixel

Throughout the development of computing, we have continually tried to shorten the distance between ourselves and digital information, the distance between the physical world and the virtual world inside the screen, where our imagination can run free. That distance has indeed shrunk: closer, then closer still, until now it is less than a millimeter, the thickness of a touchscreen, and anyone can use a computer.

But I wondered: could there be no distance at all between us and the computer? I began to imagine what that would look like. First, I built this tool, which can pass into digital space: when you press the screen hard, it transfers your physical body into pixels on the screen. Designers can materialize their ideas directly in 3D, and surgeons can practice on virtual organs beneath the screen. With this tool, the boundary between us and the computer was broken.

Even so, our hands still remain outside the screen. Could we reach a hand straight into the computer and handle digital information directly with our dexterous fingers? In Microsoft's Applied Sciences group, together with my mentor Cati Boulanger, I redesigned this computer, turning the small space above the keyboard into a digital workspace. By combining a transparent display with 3D depth cameras that sense your fingers and face, you can lift your hands off the keyboard, reach into the 3D space, and grab pixels directly with your hands.

Because windows and files have a real position in this space, selecting them is as easy as taking a book off a shelf. You can flip through a book the same way, and when you want to highlight a sentence or a few words, you just swipe across the touchpad beneath the screen. Architects can stretch a model or rotate it directly with their hands. In these examples, we really are entering the digital world.

But could it also work the other way around, with digital information coming out to meet us? I'm sure many of us have had the experience of buying something online and then having to return it. Now you no longer need to worry. What you see here is an online augmented-reality fitting room: once the system recognizes your body shape, the garment's image is overlaid on you through a head-mounted or transparent display.

Extending this idea, I began to wonder: instead of just seeing pixels in 3D space, could we make pixels physical, so that we can touch and feel them? What would such a future look like? At the MIT Media Lab, with my advisor Hiroshi Ishii and my collaborator Rehmi Post, we created this one physical pixel. The spherical magnet in this module acts like a 3D pixel in the real world, which means that both computers and people can move it freely and synchronously within this small 3D space. Essentially, we canceled out gravity, combining magnetic levitation, mechanical actuation, and sensing technologies to make it move, and by programming the object digitally we freed it from the constraints of time and space. Human motion can now be recorded, replayed, and preserved permanently in the physical world: ballet can be taught remotely, and Michael Jordan's legendary flight to the basket can be physically replayed again and again. Students can use it to learn complex concepts like planetary motion or physics, and unlike a computer screen or a textbook, this is a tangible experience you can touch and feel, which leaves a deep impression. What is even more exciting is not just making the contents of the computer physical, but that once many things around us become programmable, our daily lives will change along with them.

As you have seen, digital information will not just present knowledge and ideas; it will come alive right in front of us, as part of our surroundings, and we will no longer need to disconnect ourselves from the world.

We began today with the barrier between the two worlds, but if that barrier no longer exists, the only thing left to limit us is our own imagination.

Thank you.



Jeff Han: The radical promise of the multi-touch interface

I'm really, really excited to be here today, because I'm about to show you some technology that has just come out of development, really, and I'm very glad that you are among the first people in the world to see it, because I really, truly believe this will change, truly change, the way we interact with machines from now on.

Now, here is a rear-projected drafting table. It's about 36 inches wide, and it's equipped with a multi-touch sensor. The ordinary touch sensors you see every day, on a kiosk or an interactive whiteboard, can register only one point of contact at a time. This thing lets you have multiple points of control at the same time. The contact points can come from both my hands; I can use several fingers of one hand, or, if I want, all ten fingers at once. You know, like this.

Multi-touch is not an entirely new concept. I mean, people like the well-known Bill Buxton were already experimenting with it back in the '80s. However, the approach I've built here is high-resolution, low-cost, and, perhaps most importantly, its hardware scales very easily. So the technology itself, as I said, isn't the most exciting thing you'll see here, except perhaps for its remarkably low price. What's really interesting is what you can do with it, and the interfaces you can build on top of it. Let's take a look.
Tablet shipments continue to grow

For example, here we have a lava-lamp application. You can see that I can use both my hands to squeeze the blobs together into one mass. I can heat the system up like this, or I can pull them apart with two of my fingers. It's completely intuitive; you'll never need a manual. The interface just sort of disappears. This started as a screensaver-like app created by one of the PhD students in our lab, a guy named Ilya Rosenberg. But I think its real value shows up here.

Wearable shipment forecast
What's great about a multi-touch sensor is that, like this, I can use as many fingers as I want, but of course multi-touch also inherently means multi-user. So Chris could come up on stage and interact with another part of the lava while I play with this part over here. You can think of it as a new kind of sculpting tool: I heat a part up to make it malleable, then let it cool down and harden to some degree. Google should have something like this in their lobby. (Laughter)

I'll show you something else, a more concrete, practical example, while it loads. This is a photographer's light-box application. Again, I can use both hands to interact with the photos and move them around. But what's even cooler is that with two fingers I can actually grab a photo and stretch it very easily, like this. I can pan, zoom, and rotate it at will. I can do it with both of my palms, or with just two fingers of either hand together. If I grab the whole canvas, I can do the same thing and stretch it out. I can do it simultaneously, holding this one down while grabbing another one and stretching it out like this.
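The pan, zoom, and rotate Han demonstrates fall out of simple geometry: track two touch points, and the change in the distance between them gives the scale, the change in their angle gives the rotation, and the motion of their midpoint gives the pan. A small self-contained sketch of that math (in a real app this would sit inside the touch-event handler):

```python
import math

def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive (scale, rotation, translation) from two touches moving.

    Each argument is an (x, y) tuple: the old and new positions of the
    two fingers on the surface.
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    def midpoint(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    mo, mn = midpoint(p1_old, p2_old), midpoint(p1_new, p2_new)
    translation = (mn[0] - mo[0], mn[1] - mo[1])
    return scale, rotation, translation

# Fingers move apart and twist slightly: the photo zooms in and rotates.
s, r, t = two_finger_transform((100, 100), (200, 100), (80, 100), (240, 120))
print(f"scale x{s:.2f}, rotate {math.degrees(r):.1f} deg, pan {t}")
```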

Again, there's no interface here. No instructions needed. It behaves exactly as you'd expect, especially if you've never used a computer before. Now, with initiatives like the hundred-dollar laptop, I have reservations about the idea of introducing a whole new generation of people to computing through the traditional windows-mouse-pointer interface. Multi-touch, I believe, is the way we really should be interacting with machines from now on. (Applause) Of course, I can bring up a keyboard here, drag what I've typed out, and put it over here. Obviously this is no different from a standard keyboard, but of course I can rescale it so it fits my hands. And that's really important, because with today's technology there's no reason we should have to adapt ourselves to a physical device. That leads to bad things, like repetitive strain injuries. With so much good technology today, interfaces should start adapting to us. So far, too little has been done to really improve the way we interact with interfaces. This virtual keyboard may actually be the wrong direction to develop in. You can imagine, in the future, as we develop this kind of technology, a keyboard that drifts along automatically as your hand moves away, and very intelligently predicts which key you're about to hit. So, again, isn't this great?

Audience: Where is your lab?

Jeff Han: I'm a research scientist at New York University in New York City.

Here's an example of another application. I can make these little fuzz balls. It remembers the strokes I make. Of course, I can do it with both hands, and you'll notice it's pressure-sensitive. But the really neat part is this: I've already shown you that two-finger gesture that lets you zoom in very quickly. Because you don't need to switch to a hand tool or a magnifier tool first, you can continuously create things at multiple different scales, in real time, all at once. I can create big things out here, and I can go back, very quickly go back to where I started, and then create even smaller things over here.

This is going to become really important once we start getting into work like data visualization. For example, I think we all really enjoyed Hans Rosling's talk, and he really emphasized a fact I've also been thinking about for a long time: we have all this amazing data, but for some reason it just sits there. We aren't really accessing it. One reason, I think, is that graphics, visualization, and inference tools can help us work with this data; but a big part of what I also focus on is starting to build better user interfaces that can drill deep into data like this while still keeping a view of the big picture.

Now, let me show you another application. It's called WorldWind, and it was developed by NASA. It's like the Google Earth we've all seen; think of it as the open-source version. There are plug-ins that can load the various data sets NASA has collected over the years. But as you can see, I can use the same two-finger gestures to dive down very quickly into the Earth. Again, there's no interface to be seen. It really lets anyone feel immersed in the environment, and it behaves just as you'd expect, you know? Again, no interface here. The interface just disappears. I can switch between different views of the data; that's what's neat about this app. Like this. NASA is really cool: they have these hyperspectral images, false-colored so they're very useful for determining vegetation cover. Let's go back to where we were.

The great thing about mapping applications is that they're not just 2D; they're 3D as well. So again, with a multi-touch interface, you can use a gesture like this, so you can tilt the view like that, you know; you're not restricted to simple 2D panning and motion. We came up with this gesture where you just put down two fingers, which defines the axis of tilt, and then I can tilt up and down that way. That's something we just came up with on the spot; you know, it's probably not the best way to do it, but there are so many interesting things you can do with an interface like this. Even when you're doing nothing but playing around with it, it's just fun. (Laughter)

So, the last thing I want to show you is this: you know, I'm sure we can all think of plenty of entertainment applications for something like this. I'm a bit more interested in the creative applications we could build with it. Now, here's a simple application where I can draw a curve, and when I close the curve, it becomes a character. But the neat part is that I can add control points, and then I can manipulate them with the fingers of both hands at the same time. And you'll notice what it does: it's like puppeteering, where I can use all ten of my fingers to draw the puppet and make it move.

There's actually quite a lot of math running under the surface so it can control this mesh and make it do the right thing. I mean, this technique of being able to manipulate a mesh with multiple control points is actually state of the art; it was only released at a computer-graphics conference last year, but it's a great example of the kind of research I really love: all the computational power needed to make things do the "right" thing. The intuitive thing. Exactly what you expect.

Multi-touch interaction research is a very active field in human-computer interaction right now. I'm not the only one doing this; a lot of other people are getting into it. And this kind of technology will let even more people into the field. I'm really looking forward to interacting with all of you over the next few days and seeing how this can apply to your respective fields. Thank you.



Friday, April 11, 2014

Record U.S. Oil and Shale Gas Production, and Its Impact from 2016 to 2025: Weaker Russian Finances, a Stronger Dollar, Rising Gas-Fired Generation Efficiency, and Electric Vehicles as the New Mainstream

Shale gas extraction advances: the U.S. estimates it will need no imported crude by 2037
U.S. shale gas is highly competitive; it will shift the entire mix of energy use and may even drastically change how America uses energy

The number of U.S. oil and gas exploration wells grew by 80 over the year, another record high. According to a report from the U.S. Energy Information Administration (EIA), production from oil fields in North Dakota and Texas is increasing, and the agency estimates the United States will no longer need to import crude oil by 2037. Bloomberg reported that EIA spokesman John Krohn said this is the first time the agency's annual energy outlook has projected that oil imports for consumption could reach zero, 23 years from now.

What makes such a forecast difficult is that analysts must accurately work out how much crude oil lies thousands of feet underground, how quickly recovery technology is improving, and whether extraction makes economic sense at prevailing oil prices.

An FWBP report quotes Stephen Schork, president of the energy consultancy Schork Group Inc., who notes that ten years ago America's natural gas came from imports, "but current volumes look sufficient to export"; how the situation changes over the next few years remains open to debate.

According to the EIA's optimistic scenario, average U.S. daily crude output will rise to 13 million barrels within 20 years; even without a major breakthrough in extraction technology, the conservative estimate is still 10 million barrels. U.S. oil imports have already fallen from 13 million barrels per day in 2006 to 5 million, all thanks to dramatic advances in shale extraction technology.

The growing global impact of U.S. shale oil and shale gas

Widespread extraction and large-scale use of shale oil and shale gas will profoundly disrupt and reshape today's oil- and coal-dominated energy consumption structure.

U.S. oil consumption is falling while U.S. crude production rises

The shale oil revolution is redefining the global energy landscape. As the International Energy Agency put it in its latest World Energy Outlook, U.S. energy development has far-reaching significance, and its effects will be felt beyond North America and across the entire energy industry. As unconventional oil and gas output surges, growing U.S. energy production will accelerate the redirection of international oil trade, putting pressure on traditional producer countries and on the pricing mechanisms they gave rise to. We will feel these changes one by one in the years ahead.

The U.S. will become the world's largest oil producer by 2017

Highly competitive U.S. natural gas will be exported in the future
A report released by the International Energy Agency last November projects that the United States will replace Saudi Arabia as the world's largest oil producer in 2017, and that the U.S. is already close to its goal of energy self-sufficiency, something previously unimaginable. The forecast contrasts sharply with the agency's earlier reports, which had Saudi Arabia remaining the world's largest oil producer until 2035.

The IEA expects U.S. oil imports to keep falling, North America to become a net oil-exporting region around 2030, and the United States to become essentially energy self-sufficient around 2035. "The United States currently relies on imports for about 20 percent of its energy needs, but measured by net imports it is nearly self-sufficient, a trend quite different from that of most other energy-importing countries."

The report says the United States will overtake Russia by a comfortable margin to become the world's largest natural gas producer by 2015, and the largest oil producer by 2017. As cheap domestic supply stimulates demand from industry and power generation, the U.S. will rely more on natural gas than on oil or coal by 2035. (Note: once the U.S. dominates both oil and natural gas markets, Russia's energy exports will take a hit and its economy will suffer; Russia would then have to import a great deal of grain, and I suspect Russia's aggression against Ukraine may well be strategic, since much of Russia's imported grain comes from Ukraine.)

The coming era of electric transport will be enabled by cheap electricity, batteries, and charging technology

Elon Musk: The Future Is Fully Electric 

Interviewed at The New York Times's Dealbook Conference today by Andrew Ross Sorkin, who repeatedly questioned Musk, the CEO of Tesla Motors, about three recent fires in Tesla automobiles, Musk largely shrugged off headlines on the accidents as "misleading" and the number as statistically insignificant. He said no Tesla recall is in the near future.

In the more distant future, he had broader predictions.

"I feel confident in predicting the long term that all transport will be electronic," said Musk, who is also founder and CTO of SpaceX, the space rocket company that's contracting with NASA. He paused slightly. "With the ironic exception of rockets."

Musk says the future of the country's ground transportation will be fully electric, powered by efficient batteries, and that "we are going to look back on this era like we do on the steam engine."

"It's quaint," he said. "We should have a few of them around in a museum somewhere, but not drive them."

Whatever happened to flying cars?

"I kind of like the idea of flying cars on the one hand, but it may not be what people want," he said, adding that noise pollution could be an issue--as might interfering with sight-lines of city skylines.

Extending electric transportation to the skies, though, might be possible, and--yes--Musk even has a plan for it.

The major blocking issues for electric cars are batteries, charging time, and cost
"I do think there's a lot of possibility in creating a vertical-takeoff supersonic transport jet. It could come from a startup," Musk said, admitting that if he has another company in the future, "which will happen no time soon," he'd be open to building electric supersonic aircrafts. He even has a design in mind--inspired by the Concorde.

( Note: most electricity in the U.S. comes from burning fossil fuels, so how does a plug-in electric car deliver a benefit? If we burn natural gas in one of General Electric's modern gas-power turbines, we get about 60 percent energy efficiency. If we burn the same fuel in a car's internal combustion engine, we get only about 20 percent. The reason is that at a power plant there are many more ways to extract value from the fuel, for example feeding the waste heat into a steam turbine as a second source of electricity. So even after every transmission loss is accounted for, burning the same fuel at a power plant and using it to charge an electric car yields more than twice the benefit. )
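The note's "more than twice the benefit" claim is easy to sanity-check with rough numbers. The 60 and 20 percent figures come from the note itself; the loss factors below are illustrative assumptions, not measured data:

```python
# Rough well-to-wheels comparison for the same unit of natural-gas energy.
# All loss factors below are illustrative assumptions.
combined_cycle_plant = 0.60   # gas turbine + steam bottoming cycle (from the note)
grid_transmission    = 0.93   # assumed transmission/distribution efficiency
battery_charging     = 0.90   # assumed charger + battery round-trip efficiency
electric_drivetrain  = 0.90   # assumed motor/inverter efficiency

ev_path = combined_cycle_plant * grid_transmission * battery_charging * electric_drivetrain
ice_path = 0.20               # internal combustion engine efficiency (from the note)

print(f"EV path: {ev_path:.0%} of fuel energy reaches the wheels")
print(f"ICE path: {ice_path:.0%}")
print(f"advantage: {ev_path / ice_path:.1f}x")   # roughly 2.3x with these assumptions
```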

Volvo Develops Battery Technology Built Into Body Panels ( a new energy-saving vehicle concept )

A research project funded by the European Union has developed a revolutionary, lightweight structural energy-storage component that could be used in future electrified vehicles. There were eight major participants, with Imperial College London (ICL) of the United Kingdom as project leader. The other participants were the Swedish organisations Volvo Car Group, Swerea Sicomp AB, ETC Battery and FuelCells, and Chalmers (Swedish Hybrid Centre); Bundesanstalt für Materialforschung und -prüfung (BAM) of Germany; the Greek company Inasco; Cytec Industries, also of the United Kingdom; and Nanocyl (NCYL) of Belgium.

The battery components are moulded from materials consisting of carbon fiber in a polymer resin, nano-structured batteries and supercapacitors. Volvo says the result is an eco-friendly and cost-effective structure that will substantially cut vehicle weight and volume. In fact, by completely substituting an electric car's existing components with the new material, overall vehicle weight will be reduced by more than 15 percent.

According to Volvo, reinforced carbon fibers are first sandwiched into laminate layers with the new battery. The laminate is then shaped and cured in an oven to set and harden. The supercapacitors are integrated within the component skin. This material can then be used around the vehicle, replacing existing components such as door panels and trunk lids to store and charge energy.

Doors, fenders and trunk lids made of the laminate actually serve a dual purpose: they are lighter, saving volume and weight, and at the same time they function as electrically powered storage components with the potential to replace the standard batteries currently used in cars. The project promises to make conventional batteries a thing of the past.

Under the hood, Volvo wanted to show that the plenum replacement bar is not only capable of replacing a 12-volt system; it can also save more than 50 percent in weight. This new technology could be applied to both electric and conventional cars and used by other manufacturers.


Ford has been pushing hard on automotive technology in recent years. TechNews previously reported on Ford's experiments with a LiDAR-based self-driving prototype, and at January's CES show Ford innovated again, partnering with SunPower, a supplier of high-conversion-efficiency solar cells, to launch a prototype solar-charging electric car.

We have often seen solar-car races, where the cars must be extremely light, seat only one person, and stretch their roofs into strange shapes to maximize the sunlit area. Ford's C-MAX Solar Energi solar-charging concept prototype, however, looks just like an ordinary car, with only SunPower's X21 high-efficiency solar cells fitted to the roof. However high the conversion efficiency, isn't such a small area still next to nothing?

Ford's clever solution is a concentrating carport. It is built like an ordinary carport, except that the canopy is a transparent concentrating material, so the cost is low. When the car parks under the carport, the canopy focuses sunlight from a much larger area onto the smaller rooftop solar panel, raising the charging efficiency, and Ford adds an autonomous driving system so the car can keep itself aligned with the focused sunlight. Six hours of solar charging this way yields about 21 miles (roughly 33.8 km) of driving range.
Solar-powered green cars will arrive before 2016
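The quoted figure, about 21 miles from six hours under the concentrator, can be sanity-checked. Both the per-mile consumption and the panel rating below are assumptions chosen for illustration, not Ford's published specifications:

```python
# Sanity check on "6 hours of concentrated sun -> about 21 miles of range".
# Both inputs below are illustrative assumptions.
consumption_kwh_per_mile = 0.30   # assumed EV-mode consumption of a plug-in hybrid
energy_needed = 21 * consumption_kwh_per_mile          # ~6.3 kWh for 21 miles

charge_hours = 6
avg_charging_power_kw = energy_needed / charge_hours   # ~1.05 kW average

panel_watts = 300                                      # assumed rooftop panel rating
concentration_needed = avg_charging_power_kw * 1000 / panel_watts

print(f"energy needed: {energy_needed:.1f} kWh")
print(f"average power over {charge_hours} h: {avg_charging_power_kw:.2f} kW")
print(f"implied concentration over a {panel_watts} W panel: ~{concentration_needed:.1f}x")
```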

The C-MAX Solar Energi is converted from the C-MAX Energi plug-in hybrid, so if the sun goes down and the battery runs out, you can still drive home on gasoline. But if concentrating solar carports were widely deployed, you could park under one and charge while shopping, saving fuel with no electricity bill. Ford estimates this could cut the grid electricity used for charging by as much as 75 percent.

Concentrating carports are also easier to roll out than conventional charging stations. Electric cars currently face a shortage of charging stations, and building them widely runs into many obstacles, commercial ones included: ECOtality, which owned 12,450 charging stations, went bankrupt in October 2013 and was taken over by Car Charging Group, and NRG Energy's plan to deploy charging stations widely also hit many snags, installing only 110 of the more than 1,000 stations planned by the end of 2013, a completion rate of just 10 percent.

For the big automakers, the lack of charging stations clearly hurts the growth of electric vehicles, but charging stations involve heavy fixed investment along with grid-connection and billing issues, and cannot be pushed through overnight. Rather than wait, why not simply skip that link altogether?

Bright prospects for nano carbon fiber high-energy batteries

A new type of battery built from ultra-fine materials has drawn worldwide attention just two years after its debut: the nano carbon fiber battery. It spins carbon nanotubes into fibers, weaves them into fabric, and after treatment uses the fabric for the battery's positive and negative plates. The electrolyte is an anhydrous organic polymer electrolyte, enabling cells with large capacity and a high single-cell voltage (3.8 V), a specific energy above 230 Wh/kg, and 1,000 to 1,200 charge-discharge cycles. The specific surface area of the carbon fiber material can reach 2,000 m²/g, so the battery is exceptionally small: about 1/16 the volume of an ordinary lead-acid battery and 1/7 to 1/10 its weight, with nearly twice the specific energy of recently introduced lithium batteries, and the raw materials are readily available.
The US Dollar Index will surge as US energy imports and the trade deficit decrease

Small, light, and energy-dense, nano carbon fiber batteries can be used widely in microelectronics. A chip-style cell 1 mm in diameter and 3 mm long could power electronic watches, radios, televisions, electronic instruments, communications gear, mobile phones, and pagers, replacing many electrolytic capacitors while shrinking volume dramatically. The ultra-miniature motor recently built in the U.S., small enough to travel inside human blood vessels, could carry a nano carbon battery to clear cerebral blood clots. A micro nano-carbon cell can be made as small as 0.6 mm and, paired with the solar cells on a calculator, becomes a solar storage battery. On this trajectory, electronics applications of nano-scale batteries could reach the trillion-dollar level within two years. ( Note: near-production nano carbon fiber batteries are small and light with roughly three times the capacity of lithium-ion cells; a 2,000 mAh cell can accept a charging current of 15-20 A and so can charge fully in as little as six minutes, which will help electric vehicles overtake gasoline cars and become the world's largest industry. )
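The six-minute figure in the note follows directly from capacity divided by current (ignoring the taper near full charge); a quick check:

```python
# Charge time = capacity / current (ignoring the taper near full charge).
capacity_mah = 2000
for current_a in (15, 20):
    hours = (capacity_mah / 1000) / current_a
    rate_c = current_a / (capacity_mah / 1000)
    print(f"{current_a} A: {hours * 60:.0f} minutes (a {rate_c:.1f}C charge rate)")
```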

Analysis