
Sunday, January 25, 2015

Microsoft HoloLens: A Sci-Fi Virtual Reality Device with 3D Holographic Projection That Could Beat Google Glass to Become the Next Generation's Most Important Wearable Computer

Microsoft HoloLens: An Ultra-Sci-Fi Virtual Reality Device

HoloLens is a strikingly futuristic virtual reality device. Beyond projecting all kinds of virtual images onto real-world scenes, its real trick is that powerful real-time computing lets the wearer interact with those virtual images directly. Enlarging, shrinking, or moving objects, or even painting a virtual object, can all be done, making you as all-powerful as Tony Stark in the movie Iron Man!

HoloLens is Microsoft's first virtual reality device, and it operates entirely on its own with no extra accessories or external cables. It is built on Windows 10 together with Microsoft's Windows Holographic technology, and packs a powerful processor, a graphics chip, multiple sensors, holographic high-definition lenses, and a Holographic Processing Unit (HPU). It can detect the user's movements and the surrounding environment in real time while simultaneously processing the megabytes of data streaming in from its sensors, so users can interact with virtual objects directly. Designers can inspect and modify their products more intuitively, and the device can also be used in home entertainment, scientific research, and many other fields.

Although HoloLens has only just been announced, Microsoft has no intention of keeping the technology to itself; it has chosen to share Windows Holographic with other virtual reality device manufacturers. Microsoft says the Holograms API will be built into every Windows 10 system, and it has also unveiled HoloStudio, software for creating Holograms content. With its virtual, interactive tools you can freely design all kinds of interactive virtual interfaces and objects, and you can even print your designs directly via 3D printing.

At its Windows 10 consumer briefing, Microsoft also unveiled HoloLens, a head-mounted device that operates independently, demonstrating how a natural user interface, virtual reality display, and Windows 10 can work together, and letting partners design devices of different forms on the same foundation. In an interview afterwards, Phil Spencer, head of Microsoft's Xbox division, said HoloLens is not positioned to compete with the virtual reality headsets from Oculus, Sony, or Samsung. Although HoloLens offers the same or similar capabilities, including connecting to or streaming video content from Windows or Xbox One, its emphasis remains on operating independently across many fields rather than serving as a single-purpose display accessory.
HoloLens is a forward-looking product for Microsoft. It integrates virtual reality display with gesture- and voice-based interaction, delivers terabyte-scale real-time computation, and shows what Windows 10 could become on wearable devices. Microsoft notes the product is still at an early stage of development and says it will keep expanding the fields where it can actually be applied.

As for its design, HoloLens operates independently with no cables at all. In addition to its processor and graphics components, it adds an HPU (Holographic Processing Unit) dedicated to holographic image processing, incorporates Windows Holographic technology, and supports Universal Windows Apps.

Google Glass: Its Principles and Features Are Becoming Clearer, but Is It Being Discontinued?

Google says developers can now add voice commands for other services to the main menu. The first two commands demonstrated were "Post an update" and "Take a note"; the former currently supports the social app Path and the latter supports Evernote. Glass users must first say "OK, Glass" to activate voice commands. Posting updates and taking notes are only the beginning; in the future Google will let Glass users invoke all kinds of services by voice.

Google has also improved Google Now, a flagship service on the Glass platform. Google Now aims to show the right information at the right time and place through cards in many categories, such as traffic, weather, and live sports scores. It now adds movie, event, and emergency-notification cards: it can show which films are playing at nearby theaters, remind you that a concert or a dinner reservation is approaching, or push emergency alerts when a natural disaster strikes.

As for the new media playback tool, it adds pause, play, and fast-forward controls.

Google hopes to launch Google Glass officially this year. An estimated few thousand people are currently using the test version, Google Glass Explorer, which sells for USD 1,500.

The Google Glass team says it has "graduated": production and sales to stop next week?

Starting next Monday, January 19, the two-year Google Glass Explorer program will end; Google Glass will stop being sold and produced and will no longer take orders.

A failure? No future for smart glasses?

Well, neither. On the official Google Glass Google+ account, the team announced, "We have graduated from Google's X lab!" and thanked all the pioneers who helped them try out and experiment with Google Glass features.

That's right: Google Glass will stop selling to ordinary consumers (sales to business partners will continue), move out from under the Google X lab, and come under the supervision of Tony Fadell, CEO of the smart-home division Nest Labs.

Google has also clarified that although everything appears to be on hold, it is not giving up on Google Glass. On the contrary, it remains committed to releasing a Google Glass suitable for ordinary consumers; only the timing is uncertain. Most media outlets read the move as a sign that Google Glass has "grown up," with commercialization as the next step.

Either way, we look forward to what the next Google Glass will look like. Will it overcome the long-criticized problems of privacy, security, design, and cost?

Analysis
  • Google Glass still consumes too much power, and its price and feature set are not compelling. By comparison, Microsoft HoloLens is more practical than Google Glass, and adding a single camera would cover Google Glass's functionality; what remains to be seen is the HoloLens price.
  • Microsoft HoloLens will face power-consumption problems of its own, because its power-hungry Holographic Processing Unit (HPU) will draw more power than Google Glass.

Wednesday, January 7, 2015

From Machine Learning to a Revolution in Human Social Structures: Giant Cloud Robots Will Change Human Society Within 20 Years

It used to be that if you wanted to get a computer to do something new, you would have to program it. Now, programming, for those of you here that haven't done it yourself, requires laying out in excruciating detail every single step that you want the computer to do in order to achieve your goal. Now, if you want to do something that you don't know how to do yourself, then this is going to be a great challenge.

So this was the challenge faced by this man, Arthur Samuel. In 1956, he wanted to get this computer to be able to beat him at checkers. How can you write a program, lay out in excruciating detail, how to be better than you at checkers? So he came up with an idea: he had the computer play against itself thousands of times and learn how to play checkers. And indeed it worked, and in fact, by 1962, this computer had beaten the Connecticut state champion.
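
Samuel's trick, having the program improve by playing against itself, is the seed of what we now call reinforcement learning. As a rough illustration (not Samuel's actual checkers program), here is a minimal self-play learner for a far smaller game, Nim, where players alternately take 1 to 3 stones and whoever takes the last stone wins; all names and parameters here are invented for this sketch.

```python
import random
from collections import defaultdict

# Tabular action values: Q[(stones_left, take)] = estimated value for the
# player about to move. Both "players" share the same table: self-play.
Q = defaultdict(float)
EPS, ALPHA = 0.1, 0.5  # exploration rate, learning rate

def legal(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def choose(stones):
    """Epsilon-greedy move selection."""
    acts = legal(stones)
    if random.random() < EPS:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(stones, a)])

random.seed(0)
for _ in range(50000):
    stones, history = 15, []
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    # Whoever took the last stone wins; credit alternates back up the game.
    outcome = 1.0
    for s, a in reversed(history):
        Q[(s, a)] += ALPHA * (outcome - Q[(s, a)])
        outcome = -outcome

# The learned greedy policy should leave the opponent a multiple of 4 stones.
print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in (5, 6, 7, 9, 15)})
```

No one told the program the winning strategy; it discovered it from thousands of games against itself, just as Samuel's checkers program did.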

So Arthur Samuel was the father of machine learning, and I have a great debt to him, because I am a machine learning practitioner. I was the president of Kaggle, a community of over 200,000 machine learning practitioners. Kaggle puts up competitions to try and get them to solve previously unsolved problems, and it's been successful hundreds of times. So from this vantage point, I was able to find out a lot about what machine learning can do in the past, can do today, and what it could do in the future. Perhaps the first big success of machine learning commercially was Google. Google showed that it is possible to find information by using a computer algorithm, and this algorithm is based on machine learning. Since that time, there have been many commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest products that you might like to buy, movies that you might like to watch. Sometimes, it's almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about who your friends might be and you have no idea how it did it, and this is because it's using the power of machine learning. These are algorithms that have learned how to do this from data rather than being programmed by hand.
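
For a concrete sense of how a recommender can be learned from data rather than programmed by hand, here is a toy item-based collaborative filter. It is only a sketch with an invented rating matrix, not how Amazon or Netflix actually work, but it shows the core move: similarities are computed from user behavior, and predictions fall out of them.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 = unrated.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns, derived purely from the data.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

def predict(user, item):
    """Score an unseen item as a similarity-weighted average of the user's ratings."""
    rated = R[user] > 0
    return R[user, rated] @ sim[item, rated] / sim[item, rated].sum()

print(round(predict(0, 2), 2))  # low: user 0 dislikes items similar to item 2
print(round(predict(3, 0), 2))  # low: user 3 dislikes items similar to item 0
```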

This is also how IBM was successful in getting Watson to beat the two world champions at "Jeopardy," answering incredibly subtle and complex questions like this one. ["The ancient 'Lion of Nimrud' went missing from this city's national museum in 2003 (along with a lot of other stuff)"] This is also why we are now able to see the first self-driving cars. If you want to be able to tell the difference between, say, a tree and a pedestrian, well, that's pretty important. We don't know how to write those programs by hand, but with machine learning, this is now possible. And in fact, this car has driven over a million miles without any accidents on regular roads.

So we now know that computers can learn, and computers can learn to do things that we actually sometimes don't know how to do ourselves, or maybe can do them better than us. One of the most amazing examples I've seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy called Geoffrey Hinton from the University of Toronto won a competition for automatic drug discovery. Now, what was extraordinary here is not just that they beat all of the algorithms developed by Merck or the international academic community, but nobody on the team had any background in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm called deep learning. So important was this that in fact the success was covered in The New York Times in a front page article a few weeks later. This is Geoffrey Hinton here on the left-hand side. Deep learning is an algorithm inspired by how the human brain works, and as a result it's an algorithm which has no theoretical limitations on what it can do. The more data you give it and the more computation time you give it, the better it gets.

The New York Times also showed in this article another extraordinary result of deep learning which I'm going to show you now. It shows that computers can listen and understand.

(Video) Richard Rashid: Now, the last step that I want to be able to take in this process is to actually speak to you in Chinese. Now the key thing there is, we've been able to take a large amount of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text and converts it into Chinese language, and then we've taken an hour or so of my own voice and we've used that to modulate the standard text-to-speech system so that it would sound like me. Again, the result's not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There's much work to be done in this area. (In Chinese)

Jeremy Howard: Well, that was at a machine learning conference in China. It's not often, actually, at academic conferences that you do hear spontaneous applause, although of course sometimes at TEDx conferences, feel free. Everything you saw there was happening with deep learning. (Applause) Thank you. The transcription in English was deep learning. The translation to Chinese and the text in the top right, deep learning, and the construction of the voice was deep learning as well.

So deep learning is this extraordinary thing. It's a single algorithm that can seem to do almost anything, and I discovered that a year earlier, it had also learned to see. In this obscure competition from Germany called the German Traffic Sign Recognition Benchmark, deep learning had learned to recognize traffic signs like this one. Not only could it recognize the traffic signs better than any other algorithm, the leaderboard actually showed it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see better than people. Since that time, a lot has happened. In 2012, Google announced that they had a deep learning algorithm watch YouTube videos and crunch the data on 16,000 computers for a month, and the computer independently learned about concepts such as people and cats just by watching the videos. This is much like the way that humans learn. Humans don't learn by being told what they see, but by learning for themselves what these things are. Also in 2012, Geoffrey Hinton, who we saw earlier, won the very popular ImageNet competition, looking to try to figure out from one and a half million images what they're pictures of. As of 2014, we're now down to a six percent error rate in image recognition. This is better than people, again.
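
The YouTube result is essentially unsupervised learning: discovering concepts with no labels at all. Google's actual system was a billion-parameter network running on 16,000 machines; as a tiny stand-in for the same idea, the sketch below clusters unlabeled handwritten digits and only afterwards peeks at the labels to see that the clusters line up with human concepts.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

# Unlabeled handwritten digits: the model never sees the labels during training.
digits = load_digits()
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(digits.data)

# Only afterwards do we peek at the labels to see what each cluster "means".
for c in range(10):
    members = digits.target[kmeans.labels_ == c]
    print(f"cluster {c}: mostly digit {np.bincount(members).argmax()}")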

So machines really are doing an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers. Imagine how long it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded to Baidu's deep learning system, and underneath you can see that the system has understood what that picture is and found similar images. The similar images actually have similar backgrounds, similar directions of the faces, even some with their tongue out. It is clearly not looking at the text of a web page. All I uploaded was an image. So we now have computers which really understand what they see and can therefore search databases of hundreds of millions of images in real time.

So what does it mean now that computers can see? Well, it's not just that computers can see. In fact, deep learning has done more than that. Complex, nuanced sentences like this one are now understandable with deep learning algorithms. As you can see here, this Stanford-based system showing the red dot at the top has figured out that this sentence is expressing negative sentiment. Deep learning now in fact is near human performance at understanding what sentences are about and what they are saying about those things. Also, deep learning has been used to read Chinese, again at about native Chinese speaker level. This algorithm was developed in Switzerland by people, none of whom speak or understand any Chinese. As I say, this deep learning system is about the best in the world for this, even compared to native human understanding.
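
The Stanford system mentioned here is a recursive deep network trained over parse trees. As a much simpler, hedged stand-in for the general idea of sentiment learned from examples, here is a bag-of-words classifier trained on a tiny invented corpus; real systems use far more data and far richer models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled corpus (1 = positive, 0 = negative), a stand-in for a real dataset.
texts = [
    "a gripping, beautifully acted film", "an absolute joy from start to finish",
    "sharp writing and a wonderful cast", "one of the best movies this year",
    "a dull, lifeless mess", "tedious and painfully overlong",
    "the plot makes no sense at all", "a complete waste of two hours",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# The classifier learns which words and word pairs signal each sentiment.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["a wonderful, gripping story"]))   # expect [1]
print(model.predict(["overlong and completely dull"]))  # expect [0]
```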

This is a system that we put together at my company which shows putting all this stuff together. These are pictures which have no text attached, and as I'm typing in here sentences, in real time it's understanding these pictures and figuring out what they're about and finding pictures that are similar to the text that I'm writing. So you can see, it's actually understanding my sentences and actually understanding these pictures. I know that you've seen something like this on Google, where you can type in things and it will show you pictures, but actually what it's doing is it's searching the webpage for the text. This is very different from actually understanding the images. This is something that computers have only been able to do for the first time in the last few months.
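
A system like the one described typically maps text and images into one shared embedding space and ranks images by similarity to the query vector. The sketch below fakes both encoders with a toy bag-of-words function (a real system would use trained deep networks, which this is not), but the retrieval step, cosine similarity against an index, is the genuine pattern.

```python
import numpy as np

VOCAB = {"dog": 0, "cat": 1, "car": 2, "beach": 3}

def embed_text(text):
    """Toy encoder: bag-of-known-words, L2-normalized. A real system would use
    a learned deep network mapping text and images into one shared space."""
    v = np.zeros(len(VOCAB))
    for w in text.lower().split():
        if w in VOCAB:
            v[VOCAB[w]] += 1
    n = np.linalg.norm(v)
    return v / n if n else v

# Pretend these vectors came from a trained *image* encoder; here we fake
# them from hand-written descriptions of each picture.
image_index = {
    "img_001.jpg": embed_text("dog beach"),
    "img_002.jpg": embed_text("cat"),
    "img_003.jpg": embed_text("car"),
}

def search(query, k=2):
    """Rank images by cosine similarity between query and image embeddings."""
    q = embed_text(query)
    return sorted(image_index, key=lambda name: float(q @ image_index[name]),
                  reverse=True)[:k]

print(search("a dog running on the beach"))  # img_001.jpg ranks first
```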

So we can see now that computers can not only see but they can also read, and, of course, we've shown that they can understand what they hear. Perhaps unsurprisingly, now I'm going to tell you that they can write. Here is some text that I generated using a deep learning algorithm yesterday. And here is some text that an algorithm out of Stanford generated. Each of these sentences was generated by a deep learning algorithm to describe each of those pictures. This algorithm has never before seen a man in a black shirt playing a guitar. It's seen a man before, it's seen black before, it's seen a guitar before, but it has independently generated this novel description of this picture. We're still not quite at human performance here, but we're close. In tests, humans prefer the computer-generated caption one out of four times. This system is only two weeks old, so probably within the next year, the computer algorithm will be well past human performance at the rate things are going. So computers can also write.

So we put all this together and it leads to very exciting opportunities. For example, in medicine, a team in Boston announced that they had discovered dozens of new clinically relevant features of tumors which help doctors make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that, looking at tissues under magnification, they've developed a machine learning-based system which in fact is better than human pathologists at predicting survival rates for cancer sufferers. In both of these cases, not only were the predictions more accurate, but they generated new insightful science. In the radiology case, they were new clinical indicators that humans can understand. In this pathology case, the computer system actually discovered that the cells around the cancer are as important as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists had been taught for decades. In each of those two cases, they were systems developed by a combination of medical experts and machine learning experts, but as of last year, we're now beyond that too. This is an example of identifying cancerous areas of human tissue under a microscope. The system being shown here can identify those areas more accurately, or about as accurately, as human pathologists, but was built entirely with deep learning using no medical expertise by people who have no background in the field. Similarly, here, this neuron segmentation. We can now segment neurons about as accurately as humans can, but this system was developed with deep learning using people with no previous background in medicine.
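
The pathology results describe a familiar pipeline: tissue-derived features go in, a learned model predicts outcomes, and the model's internals point at which features matter. As a loose, hypothetical analogue (using the public scikit-learn breast cancer dataset and a diagnosis task, not the studies' survival data), the sketch below shows that pattern end to end.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Public tumor-measurement dataset as a stand-in for the studies' data; the
# task here is benign/malignant diagnosis rather than survival, but the
# pattern is the same: features in, prediction plus interpretable signals out.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0)
print(f"cv accuracy: {cross_val_score(model, data.data, data.target, cv=5).mean():.2%}")

# Feature importances hint at which measurements drive the prediction,
# the kind of humanly interpretable indicator the talk describes surfacing.
model.fit(data.data, data.target)
ranked = sorted(zip(model.feature_importances_, data.feature_names), reverse=True)
for importance, name in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```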

So myself, as somebody with no previous background in medicine, I seem to be entirely well qualified to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest that it ought to be possible to do very useful medicine using just these data analytic techniques. And thankfully, the feedback has been fantastic, not just from the media but from the medical community, who have been very supportive. The theory is that we can take the middle part of the medical process and turn that into data analysis as much as possible, leaving doctors to do what they're best at. I want to give you an example. It now takes us about 15 minutes to generate a new medical diagnostic test and I'll show you that in real time now, but I've compressed it down to three minutes by cutting some pieces out. Rather than showing you creating a medical diagnostic test, I'm going to show you a diagnostic test of car images, because that's something we can all understand.

So here we're starting with about 1.5 million car images, and I want to create something that can split them into the angle of the photo that's being taken. So these images are entirely unlabeled, so I have to start from scratch. With our deep learning algorithm, it can automatically identify areas of structure in these images. So the nice thing is that the human and the computer can now work together. So the human, as you can see here, is telling the computer about areas of interest which it wants the computer then to try and use to improve its algorithm. Now, these deep learning systems actually are in 16,000-dimensional space, so you can see here the computer rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then point out the areas that are interesting. So here, the computer has successfully found areas, for example, angles. So as we go through this process, we're gradually telling the computer more and more about the kinds of structures we're looking for. You can imagine in a diagnostic test this would be a pathologist identifying areas of pathosis, for example, or a radiologist indicating potentially troublesome nodules. And sometimes it can be difficult for the algorithm. In this case, it got kind of confused. The fronts and the backs of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts as opposed to the backs, then telling the computer that this is a type of group that we're interested in.
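
What the demo describes is a human-in-the-loop clustering workflow: the machine proposes structure in an unlabeled feature space, and the human names or corrects the groups. A minimal sketch of that proposal step, with synthetic features standing in for the 16,000-dimensional deep-learning representation, might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

# Synthetic high-dimensional feature vectors standing in for features a deep
# network would extract from the 1.5 million unlabeled car photos.
X, _ = make_blobs(n_samples=2000, centers=5, n_features=64, random_state=0)

# Step 1: project to a lower-dimensional space to expose structure.
Z = PCA(n_components=8).fit_transform(X)

# Step 2: let the machine propose groups.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z)

# Step 3 (the human's turn): inspect a few samples per cluster and name the
# groups, e.g. "fronts", "backs", "left sides". Here we just show cluster sizes.
print(np.bincount(clusters))
```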

So we do that for a while, we skip over a little bit, and then we train the machine learning algorithm based on these couple of hundred things, and we hope that it's gotten a lot better. You can see, it's now started to fade some of these pictures out, showing us that it already is recognizing how to understand some of these itself. We can then use this concept of similar images, and using similar images, you can now see, the computer at this point is able to entirely find just the fronts of cars. So at this point, the human can tell the computer, okay, yes, you've done a good job of that.

Sometimes, of course, even at this point it's still difficult to separate out groups. In this case, even after we let the computer try to rotate this for a while, we still find that the left sides and the right sides pictures are all mixed up together. So we can again give the computer some hints, and we say, okay, try and find a projection that separates out the left sides and the right sides as much as possible using this deep learning algorithm. And giving it that hint -- ah, okay, it's been successful. It's managed to find a way of thinking about these objects that separates them out.

So you get the idea here. This is a case not where the human is being replaced by a computer, but where they're working together. What we're doing here is we're replacing something that used to take a team of five or six people about seven years and replacing it with something that takes 15 minutes for one person acting alone.

So this process takes about four or five iterations. You can see we now have 62 percent of our 1.5 million images classified correctly. And at this point, we can start to quite quickly grab whole big sections, check through them to make sure that there's no mistakes. Where there are mistakes, we can let the computer know about them. And using this kind of process for each of the different groups, we are now up to an 80 percent success rate in classifying the 1.5 million images. And at this point, it's just a case of finding the small number that aren't classified correctly, and trying to understand why. And using that approach, within 15 minutes we get to 97 percent classification rates.
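
The loop just described, label a little, retrain, then let the model flag what it is least sure about, is essentially active learning with uncertainty sampling. Here is a small self-contained sketch on synthetic data (not the speaker's actual system) showing why the labeled set grows so efficiently:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A pool of examples we pretend is unlabeled; y stands in for the human,
# revealing a label only when we ask for it.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
labeled = list(range(20))  # seed set: the first few human-labeled examples

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    # Query the pool items the model is least certain about (prob near 0.5)...
    uncertainty = np.abs(model.predict_proba(X)[:, 1] - 0.5)
    already = set(labeled)
    pool = [i for i in np.argsort(uncertainty) if i not in already]
    labeled += pool[:50]  # ...and have the "human" label just those.
    print(f"round {round_}: {len(labeled)} labels, accuracy {model.score(X, y):.2%}")
```

Each round, the human labels only the examples the model finds most confusing, which is why a few hundred labels can sort more than a million images.
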
So this kind of technique could allow us to fix a major problem, which is that there's a lack of medical expertise in the world. The World Economic Forum says that there's between a 10x and a 20x shortage of physicians in the developing world, and it would take about 300 years to train enough people to fix that problem. So imagine if we can help enhance their efficiency using these deep learning approaches?

So I'm very excited about the opportunities. I'm also concerned about the problems. The problem here is that every area in blue on this map is somewhere where services are over 80 percent of employment. What are services? These are services. These are also the exact things that computers have just learned how to do. So 80 percent of the world's employment in the developed world is stuff that computers have just learned how to do. What does that mean? Well, it'll be fine. They'll be replaced by other jobs. For example, there will be more jobs for data scientists. Well, not really. It doesn't take data scientists very long to build these things. For example, these four algorithms were all built by the same guy. So if you think, oh, it's all happened before, we've seen the results in the past of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It's very hard for us to estimate this, because human performance grows at this gradual rate, but we now have a system, deep learning, that we know actually grows in capability exponentially. And we're here. So currently, we see the things around us and we say, "Oh, computers are still pretty dumb." Right? But in five years' time, computers will be off this chart. So we need to be starting to think about this capability right now.

We have seen this once before, of course. In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all the situations, things really settled down. The Machine Learning Revolution is going to be very different from the Industrial Revolution, because the Machine Learning Revolution, it never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before, so your previous understanding of what's possible is different.

This is already impacting us. In the last 25 years, as capital productivity has increased, labor productivity has been flat, in fact even a little bit down.

So I want us to start having this discussion now. I know that when I often tell people about this situation, people can be quite dismissive. Well, computers can't really think, they don't emote, they don't understand poetry, we don't really understand how they work. So what? Computers right now can do the things that humans spend most of their time being paid to do, so now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality. Thank you!


Wednesday, December 10, 2014

OPEC's Strategy Sparks 2015's Biggest Global Investment Opportunity: The Most Competitive Oil Companies and Commodities in an Era of Ultra-Low Oil Prices

Is OPEC's strategy working? Oil prices keep sliding and US shale oil development hits the brakes

Oil prices keep sliding, US shale development hits the brakes: a huge opportunity for the global economy
Plunging oil prices benefit consumers, but many OPEC producers are crying foul. Now, beyond Venezuela, the US shale oil producers who drove this surge in output and slide in prices are also starting to feel the impact of low oil prices.

According to CNNMoney, with oil prices continuing to slide, US shale developers have begun to worry that new projects may not be profitable. US producer ConocoPhillips (COP-US) was the first to announce deep cuts to its 2015 spending, and financial markets widely expect other operators to follow. OPEC's strategy of squeezing US shale appears to be paying off.

Oil prices have plunged 40% since June. Many economically fragile producer nations hoped OPEC would cut output, but Saudi-led OPEC decided in late November not to, sending prices steadily lower. The oil market widely believes the move is meant to squeeze US shale development and protect OPEC's market share.

Relentlessly falling oil prices are a boon for consumers worldwide but a heavy blow to the energy industry. Some countries, such as Iran and Venezuela, have high production costs, and low prices leave them unable to make ends meet. Allen Good, an analyst covering energy companies at Morningstar, expects US oil producers to cut capital spending across the board.
For 2015, the main theme is shorting commodities, especially oil.
ConocoPhillips, one of the major shale producers, drew attention by cutting next year's capital spending by 20%. The company says the new target reflects lower spending on major projects. Although analysts think other producers may follow with cuts of their own, that does not mean US shale production is dead; it just means shale capital spending is unlikely to grow much in the next few years.

ConocoPhillips CEO Ryan Lance called it a conservative spending plan given the current environment. That suggests the company expects oil prices to stay low for the next year or two, which worries investors. ConocoPhillips shares fell 3% after the announcement, the first drop of that size since late February.

Beyond Conoco, a Morgan Stanley report released Monday warned that crude faces its biggest threat since the 2008 financial crisis and could briefly fall to USD 35-40 next year before recovering.

Just last week, ExxonMobil (XOM-US) and Chevron (CVX-US) said they would be unaffected even if oil fell to USD 40. Oppenheimer analyst Fadel Gheit likewise noted that Chevron considers USD 40 oil manageable. In Gheit's view, the best outcome for Chevron and Exxon would be a price crash that scares everyone else, opening a window for acquisitions. He predicts a wave of mergers and acquisitions if prices have not recovered by next summer.

ExxonMobil and Chevron, however, are large producers with more financial flexibility. Even so, Morningstar expects both to announce modest spending cuts in the coming months, though nothing on Conoco's scale. Gheit adds that even ExxonMobil could not hold out long at USD 40 a barrel: at prices that low, not even Exxon could generate enough cash flow to cover capital spending and dividends, meaning it would have to sell assets, slash spending, or issue debt.

Meanwhile, as oil producers cut capital spending, the heavy-equipment makers that depend on the industry take a direct hit: Schlumberger (SLB-US) and Halliburton (HAL-US) shares each fell 2%.

$40 oil doesn't scare Big Oil. Here's why

If OPEC hoped to scare Big Oil, it's not working. Oil prices have plunged below $70 per barrel following the cartel's decision to keep production steady despite tumbling prices.

But oil heavyweights ExxonMobil (XOM) and Chevron (CVX) claim they aren't losing sleep over the oil price drop. In fact, they could survive oil as low as $40 per barrel. Exxon CEO Rex Tillerson told CNBC on Wednesday his company's massive energy projects are decade-long investment decisions that have been tested to be successful even "at the bottom of the cycle." "We test across a range all the way down to $40 and up to $120," Tillerson said.

Chevron also believes it could weather the storm down to $40 a barrel, according to Oppenheimer analyst Fadel Gheit, who cited a recent conversation with executives from the energy giant. Chevron did not respond to CNNMoney's request for comment.

Fire sales ahead? The defiant statements show how the American energy industry is not backing down against OPEC, which appears to be attempting to choke off the U.S. shale boom with painfully low prices.

If oil prices remain low -- or even tumble further -- some smaller energy companies and high-cost producers are likely to find themselves in serious financial trouble.

That could present a buying opportunity for Big Oil companies that have the financial flexibility to take advantage of a fire sale. "The best thing for Chevron and Exxon...is to see oil prices crashing and scare the hell out of everybody else. It becomes a window of opportunity" for acquisitions, said Gheit. He predicted a wave of mergers and acquisitions if oil prices don't recover by next summer.

Exxon CEO says company can weather $40 oil

The short crude-oil ETF SCO has already gained more than 100%.
SAN FRANCISCO (MarketWatch) -- Exxon Mobil Corp. XOM, -2.26% has tested the profitability of its oil plays against crude-oil prices as low as $40 a barrel and as high as $120 a barrel, CEO Rex Tillerson told CNBC on Wednesday. Exxon makes decades-long investment decisions, and its exploration and production projects have been tested to accommodate price swings, Tillerson said. Shares of Exxon rose on Wednesday, and crude-oil futures CLF5, -0.87% also gained after a weekly supply report showed a surprise decline in inventories. Crude futures have plunged in recent sessions after OPEC dashed hopes of an output cut that would help boost oil prices, which have fallen nearly 40% since June.

The crude oil storm spreads: Nigeria could be OPEC's next ticking time bomb

The oil price collapse is sweeping through OPEC producers like an autumn wind scattering leaves. After reports that Venezuela, holder of the world's largest crude reserves, could go bankrupt within two years, Nigeria is seen as the next potential blow-up.

Nigeria is the most populous OPEC member, but it is also plagued by domestic religious and ethnic conflict; oil exports are not only its main source of income but, at present, the only thing holding the country together. According to CNBC.com, an RBC Capital report released last week said that if oil prices keep falling, Nigeria has the highest probability of civil war of any OPEC country. Nigeria's stock index peaked in July, when Brent crude briefly touched USD 115 a barrel; as oil fell to around USD 66 on Tuesday, the index slid 23% from that high.

Barclays estimates that oil and gas account for as much as 95% of Nigeria's export value and about 70% of its government budget. Every USD 1 drop in the oil price cuts Nigeria's export revenue by USD 700 million, which suggests its current account could swing from surplus to deficit this year. While OPEC producers undercut one another for market share, the United States reaps the benefit: low oil prices aid the economic recovery of the US and of allies such as Japan, Egypt, and Israel, while battering the outlook of adversaries such as Russia, Iran, Syria, and Venezuela, giving the US a win both for its domestic economy and its foreign strategy.

Analysis

Wednesday, October 15, 2014

The World's Best Creative Chefs and Restaurant: El Celler de Can Roca and the Art of Color, Aroma, Flavor, and Wellness in Food (with Love Songs 2014)

El Celler de Can Roca is a restaurant in Girona, Catalonia, Spain, opened in 1986 by the three Roca brothers: Josep, Jordi, and Joan Roca. At first it stood next to their parents' restaurant, Can Roca, but in 2007 it moved into its current purpose-built building. It has received glowing reviews and holds three Michelin stars. In 2013 it was named the best restaurant in the world by the magazine Restaurant, after ranking second in 2011 and 2012. El Celler de Can Roca is devoted to the color, aroma, flavor, healthfulness, and artistry of its food; beyond creative cuisine, it also publishes books, blending tourism, cultural creativity, food, and the culinary arts into one whole.
 
The Roca brothers' passion for cooking was initially kindled in Can Roca, the establishment their parents manage in Taialà, a neighborhood lying on the outskirts of Girona.

It is where they grew up, amid the hubbub of dishes, pots and pans and clients. The bar was their living room, their playground, where they did their homework, watched television... whence the aroma of the stews generously, simply and honestly prepared by their mother, wafted in.

On November 25, 2009, in the Bar Tomate in Madrid, Josep, Jordi and Joan Roca celebrated the third star awarded by the Michelin Guide to El Celler de Can Roca. “The only thing that matters is that the client is satisfied and wants to return, regardless of whether the restaurant is number one or number two, three or fifty, or whether we have three stars or two”, says Joan.
Music And Love Songs 2014


Besug de la piga, yuzu i tàperes