Wednesday, January 7, 2015

From Machine Learning to a Revolution in Human Social Structures: Giant Cloud Robots Will Change Human Society Within 20 Years

It used to be that if you wanted to get a computer to do something new, you would have to program it. Now, programming, for those of you here that haven't done it yourself, requires laying out in excruciating detail every single step that you want the computer to do in order to achieve your goal. Now, if you want to do something that you don't know how to do yourself, then this is going to be a great challenge.

So this was the challenge faced by this man, Arthur Samuel. In 1956, he wanted to get this computer to be able to beat him at checkers. How can you write a program, lay out in excruciating detail, how to be better than you at checkers? So he came up with an idea: he had the computer play against itself thousands of times and learn how to play checkers. And indeed it worked, and in fact, by 1962, this computer had beaten the Connecticut state champion.

So Arthur Samuel was the father of machine learning, and I have a great debt to him, because I am a machine learning practitioner. I was the president of Kaggle, a community of over 200,000 machine learning practitioners. Kaggle puts up competitions to try and get them to solve previously unsolved problems, and it's been successful hundreds of times. So from this vantage point, I was able to find out a lot about what machine learning could do in the past, can do today, and what it could do in the future. Perhaps the first big commercial success of machine learning was Google. Google showed that it is possible to find information by using a computer algorithm, and this algorithm is based on machine learning. Since that time, there have been many commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest products that you might like to buy, movies that you might like to watch. Sometimes, it's almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about who your friends might be and you have no idea how they did it, and this is because they're using the power of machine learning. These are algorithms that have learned how to do this from data rather than being programmed by hand.

This is also how IBM was successful in getting Watson to beat the two world champions at "Jeopardy," answering incredibly subtle and complex questions like this one. ["The ancient 'Lion of Nimrud' went missing from this city's national museum in 2003 (along with a lot of other stuff)"] This is also why we are now able to see the first self-driving cars. If you want to be able to tell the difference between, say, a tree and a pedestrian, well, that's pretty important. We don't know how to write those programs by hand, but with machine learning, this is now possible. And in fact, this car has driven over a million miles without any accidents on regular roads.

So we now know that computers can learn, and computers can learn to do things that we actually sometimes don't know how to do ourselves, or maybe can do them better than us. One of the most amazing examples I've seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy called Geoffrey Hinton from the University of Toronto won a competition for automatic drug discovery. Now, what was extraordinary here is not just that they beat all of the algorithms developed by Merck or the international academic community, but nobody on the team had any background in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm called deep learning. So important was this that in fact the success was covered in The New York Times in a front page article a few weeks later. This is Geoffrey Hinton here on the left-hand side. Deep learning is an algorithm inspired by how the human brain works, and as a result it's an algorithm which has no theoretical limitations on what it can do. The more data you give it and the more computation time you give it, the better it gets.

The New York Times also showed in this article another extraordinary result of deep learning which I'm going to show you now. It shows that computers can listen and understand.

(Video) Richard Rashid: Now, the last step that I want to be able to take in this process is to actually speak to you in Chinese. Now the key thing there is, we've been able to take a large amount of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text and converts it into Chinese language, and then we've taken an hour or so of my own voice and we've used that to modulate the standard text-to-speech system so that it would sound like me. Again, the result's not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There's much work to be done in this area. (In Chinese)

Jeremy Howard: Well, that was at a machine learning conference in China. It's not often, actually, at academic conferences that you do hear spontaneous applause, although of course sometimes at TEDx conferences, feel free. Everything you saw there was happening with deep learning. (Applause) Thank you. The transcription in English was deep learning. The translation to Chinese and the text in the top right, deep learning, and the construction of the voice was deep learning as well.

So deep learning is this extraordinary thing. It's a single algorithm that can seem to do almost anything, and I discovered that a year earlier, it had also learned to see. In this obscure competition from Germany called the German Traffic Sign Recognition Benchmark, deep learning had learned to recognize traffic signs like this one. Not only could it recognize the traffic signs better than any other algorithm, the leaderboard actually showed it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see better than people. Since that time, a lot has happened. In 2012, Google announced that they had a deep learning algorithm watch YouTube videos and crunched the data on 16,000 computers for a month, and the computer independently learned about concepts such as people and cats just by watching the videos. This is much like the way that humans learn. Humans don't learn by being told what they see, but by learning for themselves what these things are. Also in 2012, Geoffrey Hinton, who we saw earlier, won the very popular ImageNet competition, looking to try to figure out from one and a half million images what they're pictures of. As of 2014, we're now down to a six percent error rate in image recognition. This is better than people, again.

So machines really are doing an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers. Imagine how long it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded to Baidu's deep learning system, and underneath you can see that the system has understood what that picture is and found similar images. The similar images actually have similar backgrounds, similar directions of the faces, even some with their tongue out. This is clearly not looking at the text of a web page. All I uploaded was an image. So we now have computers which really understand what they see and can therefore search databases of hundreds of millions of images in real time.

So what does it mean now that computers can see? Well, it's not just that computers can see. In fact, deep learning has done more than that. Complex, nuanced sentences like this one are now understandable with deep learning algorithms. As you can see here, this Stanford-based system showing the red dot at the top has figured out that this sentence is expressing negative sentiment. Deep learning now in fact is near human performance at understanding what sentences are about and what they are saying about those things. Also, deep learning has been used to read Chinese, again at about native Chinese speaker level. This algorithm was developed in Switzerland by people, none of whom speak or understand any Chinese. As I say, deep learning is about the best system in the world for this, even compared to native human understanding.
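The Stanford system mentioned here is a recursive deep network trained over parse trees, which is beyond a short example; but the underlying idea of *learning* sentiment from labeled examples, rather than hand-coding rules, can be sketched with a toy bag-of-words logistic regression. Everything below (the corpus, vocabulary, and labels) is invented purely for illustration:

```python
import numpy as np

# Toy corpus: 1 = positive sentiment, 0 = negative. (Illustrative data only.)
docs = [("a charming and often affecting journey", 1),
        ("an intelligent and moving film", 1),
        ("a gorgeous and witty ride", 1),
        ("unflinchingly bleak and desperate", 0),
        ("a sour and dull mess", 0),
        ("bleak dull and desperate viewing", 0)]

vocab = sorted({w for text, _ in docs for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    """Bag-of-words count vector over the toy vocabulary."""
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            v[index[w]] += 1
    return v

X = np.array([featurize(t) for t, _ in docs])
y = np.array([label for _, label in docs])

# Logistic regression trained by plain gradient descent: the weights are
# learned from the data, not programmed by hand.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))           # predicted P(positive)
    grad_w, grad_b = X.T @ (p - y) / len(y), np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

def predict(text):
    p = 1 / (1 + np.exp(-(featurize(text) @ w + b)))
    return "positive" if p > 0.5 else "negative"

print(predict("a witty and charming film"))   # leans positive
print(predict("a bleak and dull mess"))       # leans negative
```

The deep-learning systems in the talk replace the bag-of-words features with learned representations that capture word order and composition, which is what makes nuanced sentences tractable.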

This is a system that we put together at my company which shows putting all this stuff together. These are pictures which have no text attached, and as I'm typing in here sentences, in real time it's understanding these pictures and figuring out what they're about and finding pictures that are similar to the text that I'm writing. So you can see, it's actually understanding my sentences and actually understanding these pictures. I know that you've seen something like this on Google, where you can type in things and it will show you pictures, but actually what it's doing is it's searching the webpage for the text. This is very different from actually understanding the images. This is something that computers have only been able to do for the first time in the last few months.
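A system like the one described here pairs a text encoder and an image encoder that map into a shared embedding space, then returns the images nearest to the sentence vector. As a hedged sketch of just the retrieval step: the "embeddings" below are random stand-ins rather than real encoder outputs, and every name (`concepts`, `image_db`, `search`) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "shared embedding space": in a real system, two deep networks map
# images and sentences into the same vector space; here we fabricate vectors
# so that only the retrieval step itself is shown.
dim = 64
concepts = {"dog on a beach": rng.normal(size=dim),
            "city skyline at night": rng.normal(size=dim),
            "plate of food": rng.normal(size=dim)}

# Each "image" embedding is its concept vector plus a little noise.
image_db = {f"{name} / photo {i}": vec + 0.1 * rng.normal(size=dim)
            for name, vec in concepts.items() for i in range(3)}

def search(query_vec, db, k=3):
    """Return the k images whose embeddings have highest cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    ranked = sorted(db.items(), key=lambda kv: cos(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# "Typing a sentence" corresponds to embedding the query with the text
# encoder; here we reuse the concept vector as the query embedding.
for hit in search(concepts["dog on a beach"], image_db):
    print(hit)
```

Because both modalities live in one space, the same nearest-neighbor search works in either direction, which is what distinguishes this from keyword search over a page's surrounding text.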

So we can see now that computers can not only see but they can also read, and, of course, we've shown that they can understand what they hear. Perhaps not surprising now that I'm going to tell you they can write. Here is some text that I generated using a deep learning algorithm yesterday. And here is some text that an algorithm out of Stanford generated. Each of these sentences was generated by a deep learning algorithm to describe each of those pictures. This algorithm has never before seen a man in a black shirt playing a guitar. It's seen a man before, it's seen black before, it's seen a guitar before, but it has independently generated this novel description of this picture. We're still not quite at human performance here, but we're close. In tests, humans prefer the computer-generated caption one out of four times. Now this system is only two weeks old, so probably within the next year, the computer algorithm will be well past human performance, at the rate things are going. So computers can also write.

So we put all this together and it leads to very exciting opportunities. For example, in medicine, a team in Boston announced that they had discovered dozens of new clinically relevant features of tumors which help doctors make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that, looking at tissues under magnification, they've developed a machine learning-based system which in fact is better than human pathologists at predicting survival rates for cancer sufferers. In both of these cases, not only were the predictions more accurate, but they generated new insightful science. In the radiology case, these were new clinical indicators that humans can understand. In the pathology case, the computer system actually discovered that the cells around the cancer are as important as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists had been taught for decades. In each of those two cases, they were systems developed by a combination of medical experts and machine learning experts, but as of last year, we're now beyond that too. This is an example of identifying cancerous areas of human tissue under a microscope. The system being shown here can identify those areas as accurately as, or more accurately than, human pathologists, but was built entirely with deep learning, using no medical expertise, by people who have no background in the field. Similarly, with this neuron segmentation: we can now segment neurons about as accurately as humans can, but this system was developed with deep learning by people with no previous background in medicine.

So myself, as somebody with no previous background in medicine, I seem to be entirely well qualified to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest that it ought to be possible to do very useful medicine using just these data analytic techniques. And thankfully, the feedback has been fantastic, not just from the media but from the medical community, who have been very supportive. The theory is that we can take the middle part of the medical process and turn that into data analysis as much as possible, leaving doctors to do what they're best at. I want to give you an example. It now takes us about 15 minutes to generate a new medical diagnostic test and I'll show you that in real time now, but I've compressed it down to three minutes by cutting some pieces out. Rather than showing you creating a medical diagnostic test, I'm going to show you a diagnostic test of car images, because that's something we can all understand.

So here we're starting with about 1.5 million car images, and I want to create something that can split them into the angle of the photo that's being taken. So these images are entirely unlabeled, so I have to start from scratch. With our deep learning algorithm, it can automatically identify areas of structure in these images. So the nice thing is that the human and the computer can now work together. So the human, as you can see here, is telling the computer about areas of interest which it wants the computer then to try and use to improve its algorithm. Now, these deep learning systems actually are in 16,000-dimensional space, so you can see here the computer rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then point out the areas that are interesting. So here, the computer has successfully found areas, for example, angles. So as we go through this process, we're gradually telling the computer more and more about the kinds of structures we're looking for. You can imagine in a diagnostic test this would be a pathologist identifying areas of pathosis, for example, or a radiologist indicating potentially troublesome nodules. And sometimes it can be difficult for the algorithm. In this case, it got kind of confused. The fronts and the backs of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts as opposed to the backs, then telling the computer that this is a type of group that we're interested in.

So we do that for a while, we skip over a little bit, and then we train the machine learning algorithm based on these couple of hundred things, and we hope that it's gotten a lot better. You can see, it's now started to fade some of these pictures out, showing us that it already is recognizing how to understand some of these itself. We can then use this concept of similar images, and using similar images, you can now see, the computer at this point is able to entirely find just the fronts of cars. So at this point, the human can tell the computer, okay, yes, you've done a good job of that.

Sometimes, of course, even at this point it's still difficult to separate out groups. In this case, even after we let the computer try to rotate this for a while, we still find that the left-side and the right-side pictures are all mixed up together. So we can again give the computer some hints, and we say, okay, try and find a projection that separates out the left sides and the right sides as much as possible using this deep learning algorithm. And giving it that hint -- ah, okay, it's been successful. It's managed to find a way of thinking about these objects that has separated these out.

So you get the idea here. This is a case not where the human is being replaced by a computer, but where they're working together. What we're doing here is we're replacing something that used to take a team of five or six people about seven years and replacing it with something that takes 15 minutes for one person acting alone.

So this process takes about four or five iterations. You can see we now have 62 percent of our 1.5 million images classified correctly. And at this point, we can start to quite quickly grab whole big sections, check through them to make sure that there's no mistakes. Where there are mistakes, we can let the computer know about them. And using this kind of process for each of the different groups, we are now up to an 80 percent success rate in classifying the 1.5 million images. And at this point, it's just a case of finding the small number that aren't classified correctly, and trying to understand why. And using that approach, by 15 minutes we get to 97 percent classification rates.
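The loop described over the last few paragraphs, seed a few labels, train, inspect the cases the model is least sure about, correct them, and retrain, is essentially human-in-the-loop active learning. Here is a minimal sketch of that loop, assuming stand-in Gaussian clusters in place of real deep-learning image features and an oracle array (`truth`) in place of the human; all names and numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for deep-learning features: two hidden groups (say, car fronts
# vs. backs) as Gaussian clusters in a 16-dimensional feature space.
n_per, dim = 750, 16
centers = rng.normal(size=(2, dim)) * 3
X = np.vstack([c + rng.normal(size=(n_per, dim)) for c in centers])
truth = np.repeat([0, 1], n_per)      # hidden; only the "human" consults it

labeled = {}                          # index -> label supplied by the human

def predict(X, labeled):
    """Nearest-centroid classifier built from the human-labeled examples."""
    cents = [X[[i for i, l in labeled.items() if l == k]].mean(axis=0)
             for k in (0, 1)]
    d = np.stack([np.linalg.norm(X - c, axis=1) for c in cents])
    # Small margin between the two distances = low confidence.
    return d.argmin(axis=0), np.abs(d[0] - d[1])

# Seed: the human labels one example of each group.
labeled[0], labeled[n_per] = 0, 1

for it in range(4):
    pred, conf = predict(X, labeled)
    # The human reviews the 25 least-confident examples and corrects them.
    for i in np.argsort(conf)[:25]:
        labeled[i] = truth[i]
    print(f"iteration {it}: {(pred == truth).mean():.0%} classified correctly")
```

The point mirrors the talk: the human never labels the bulk of the data; they only adjudicate the cases the model finds hardest, and the retrained model propagates those few decisions across the whole collection.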

So this kind of technique could allow us to fix a major problem, which is that there's a lack of medical expertise in the world. The World Economic Forum says that there's between a 10x and a 20x shortage of physicians in the developing world, and it would take about 300 years to train enough people to fix that problem. So imagine if we could enhance their efficiency using these deep learning approaches.

So I'm very excited about the opportunities. I'm also concerned about the problems. The problem here is that every area in blue on this map is somewhere where services are over 80 percent of employment. What are services? These are services. These are also the exact things that computers have just learned how to do. So 80 percent of the world's employment in the developed world is stuff that computers have just learned how to do. What does that mean? Well, it'll be fine. They'll be replaced by other jobs. For example, there will be more jobs for data scientists. Well, not really. It doesn't take data scientists very long to build these things. For example, these four algorithms were all built by the same guy. So if you think, oh, it's all happened before, we've seen the results in the past of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It's very hard for us to estimate this, because human performance grows at this gradual rate, but we now have a system, deep learning, that we know actually grows in capability exponentially. And we're here. So currently, we see the things around us and we say, "Oh, computers are still pretty dumb." Right? But in five years' time, computers will be off this chart. So we need to be starting to think about this capability right now.

We have seen this once before, of course. In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all the situations, things really settled down. The Machine Learning Revolution is going to be very different from the Industrial Revolution, because the Machine Learning Revolution, it never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before, so your previous understanding of what's possible is different.

This is already impacting us. In the last 25 years, as capital productivity has increased, labor productivity has been flat, in fact even a little bit down.

So I want us to start having this discussion now. I know that when I often tell people about this situation, people can be quite dismissive. Well, computers can't really think, they don't emote, they don't understand poetry, we don't really understand how they work. So what? Computers right now can do the things that humans spend most of their time being paid to do, so now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality. Thank you!


Monday, December 29, 2014

With Gmail, Line, and Twitter Extensively Blocked in China, Within Five Years the Mainland Will Be the World's Most Totalitarian and Undemocratic Country, and China's Next Generation Will Struggle to Understand the Difference Between Popular Sovereignty and CCP Totalitarianism

Gmail Blocked in China; Netizens Voice Worry and Discontent

What is absurd about the CCP is that it is mainland China's young people who use Gmail. Some criticize the CCP for making everything political; it is not as if Google wants its users treated this way. At least Google respects its users, unlike the CCP regime, which shows no such respect for its people's freedom of thought and arbitrarily blocks Gmail, Line, and Twitter. At the very least, there should be evidence against a particular user before blocking that user. Within five years the mainland will be the most totalitarian country in the world, and also the one that seems strangest and most shameful.

Reports indicate that Gmail has been blocked in China. Large numbers of Chinese netizens who use Gmail can no longer send or receive mail from their email clients, leaving many users unhappy. The web version of Gmail has long been subject to interference in China, with very unstable connections, so many users not adept at circumventing the firewall had switched to email clients on their computers or mobile devices. These clients use the IMAP and POP3 protocols rather than going through Gmail's web interface.

Now these client channels have been blocked as well. Real-time traffic data for Gmail in China shows that from December 27 almost no netizens have been able to use Gmail. Many Chinese netizens have expressed their worry and discontent on social networks such as Weibo and Twitter. The user 洛之秋 wrote: Many students are right now in the critical period of applying to study abroad, and the contact address they filled in is a Gmail account; this blockade will cause them enormous inconvenience in communicating with overseas universities. Years from now, this experience will probably make them all the more unhesitating when weighing whether to return to China.

The user 劉興量 said on Weibo that Gmail has gone completely dark. To put it simply: previously only the Gmail web page was inaccessible, but you could still receive and send mail through clients on your phone or computer. Now, don't even hope for the mail in your email client to update. The user 從來1840 wrote: I don't even know what to say; I just want to curse. Every day it's all window dressing, plastered everywhere thicker than street spam. Ah, "Chinese characteristics": now Gmail has been blocked too.

The user HeySuperman-伍壕 said: Never mind Gmail, none of Google's services work at all; even using an email account gets you blocked. It is sickening and damned annoying, and it brings great inconvenience to daily life. Why not just build a wall around China? That would suit our tradition; simply close the country off, block out all outside information, and be just like North Korea, how wonderful. The user 竹芒Reason pointed out that Gmail has been completely blocked inside China: people not only cannot receive mail on the web or in any client software, even mail sent to Gmail addresses gets [intercepted]... This move is truly ruthless; even people overseas are hit along with us... How twisted must a person's mind be to come up with a move like this?

The user 簡單說兩句 wrote: 1. Gmail was blocked without a word; 2. the TV drama The Empress of China (武媚娘傳奇) was pulled off the air. I just want to ask one question: why?

A netizen called 吳醫師 said: Inside China, Google has been blocked across the board. Gmail is the mailbox the kids like to use, and it was shut down long ago. Yahoo also works only intermittently; some article titles appear but cannot be opened; and there is more... The propaganda department is not only mighty but also slippery. Heh! The user 資深帥鍋 remarked: The Ministry of Commerce says China will improve the environment for foreign investment in 2015, yet Gmail has been blocked dead and you cannot even receive foreigners' emails; what environment has been improved? Ha! Google shut down its mainland China search service, google.cn, in March 2010. This year, on the eve of the 25th anniversary of the June 4 incident, Google's services in China were blocked; all of the company's services, including Gmail, cannot be accessed or used normally in China.

Mainland China Blocks LINE, Facebook, Google, and Twitter: Is It the Only Country in the World So Afraid of Democracy?
Mainland China's total debt reached 277% of GDP in 2012, will surpass Germany's level by 2014, and by 2016 is projected to make China a highly indebted country on a par with Italy.
 

Earlier, word spread online that LINE (officially "連我" in China), the instant-messaging app that had only just begun breaking into the Chinese market, would be unusable on the mainland from July 1, raising suspicions that the authorities had moved against the app. The move finally angered mainland netizens who usually take such things lying down; they criticized the central government for stripping them of their right to choose, and worried that the hugely popular photo-sharing app Instagram would likewise be banned, cutting Chinese netizens off from the world.

China's GFW (Great Firewall) has long been "hailed" as China's virtual Great Wall, filtering and blocking certain keywords, IP addresses, and specific websites. The best-known examples are the GFW's blocks on the video site YouTube, the social network Facebook, and the search engine Google, all commonly used by Hong Kongers. Recently, some Chinese netizens reported that from July 1, when users in China opened LINE and sent a message, only an exclamation mark appeared beside it, meaning the message could not be delivered. Some tried deleting and reinstalling the app several times, only to face the same problem; some worried that their LINE chat histories and purchased sticker sets would be lost, and turned to LINE's official Weibo account for help. LINE stated via its official Weibo: "LINE users in China are currently experiencing access problems; we are doing our utmost to fix the issue."

... Continue reading

Analysis
  • Mainland China's blocking of LINE, Facebook, Google, and Twitter is simply not appropriate. Measured against the progress of democracies worldwide, mainland China's democracy lags even behind Africa's, which is a disgrace; it must improve and give its people freedom of community activity. Few governments in the world control their people's community life with such totalitarian strictness. If, after 2019-2020, the mainland Chinese people are still less free than sheep raised in Africa, would that not be shameful? China's leaders should give the people basic democratic freedoms, just as they give their own sons and daughters basic democratic freedoms; that would be the truly great thing to do.
  • Given mainland China's growth in national income and its level of educational attainment, granting the people freedom of community activity and basic democratic elections is simply what ought to be done. A democratic mainland China would be the pride of Chinese people worldwide; its lack of democracy and its internet blockade are a disgrace to Chinese people worldwide. That only the children of top CCP officials may live free, democratic lives in Europe and America, while 1.3 billion mainland Chinese are denied basic democracy, is plainly an enormous disgrace.
  • As mainland China's internet blockade of its young people gradually extends into thought surveillance, the mainland's next generation will find it hard to understand the difference between popular sovereignty and CCP totalitarianism. Taiwan, for its part, must take extra care not to lean too close to mainland China; expanding trade and people-to-people ties with other countries is the real way to protect the Republic of China on Taiwan.

Friday, December 19, 2014

China Accelerates Cooperation with Central Asian Countries While Russia, Hit by Low Oil Prices, a Collapsing Ruble, and the Ukraine Problem, Slides into Crisis: Chinese Influence Reaches Deep into Central Asia and Russia ( Russia's crisis speeds up China's push into Central Asia and extends Chinese influence toward Europe )

Facing Central Asia: China Rises, Russia Bows

Less than a decade ago, no one questioned where the newly independent states of Central Asia would send their vast supplies of oil and natural gas: their energy infrastructure and markets were dominated by Russia. Today, however, as new fields come online, the pipelines run east to China. This week, during his tour of Central Asia, Xi Jinping signed a string of energy deals and pledged massive investment; after this visit, no one will doubt who Central Asia's new economic power is.

Turkmenistan was already China's largest supplier of natural gas; there, Xi Jinping attended the inauguration of the world's second-largest gas field, and China's gas imports from Turkmenistan are expected to triple. In Kazakhstan, a US$30 billion package included a stake in the world's largest new oil discovery in decades. Xi also joined Uzbek President Islam Karimov in unveiling US$15 billion in oil, gas, and uranium deals.

Of the region's five countries, China is already the largest trading partner of four (the sole exception being Uzbekistan). According to Chinese state media, China's trade with Central Asia reached US$46 billion last year, 100 times its level when the Central Asian states gained independence from the Soviet Union 20 years ago. Though neither side says so, China's rising standing in Central Asia has come at the price of Russia's decline.

Russia still controls most of Central Asia's energy exports, but its relative economic influence in the region is waning. For years Russia treated Central Asia as its own backyard, insisting on buying oil and gas at below-market prices and re-exporting them at a markup. That is precisely why Kazakhstan and Turkmenistan have turned to China's embrace.

Even so, Russia and China still care a great deal about their relationship. Russia hopes to profit from China's economic strength, while China sees Russia as an important ally on the world stage. Seen this way, China and Russia will both compete and cooperate, at least for now. Vasily Kashin, a China expert in Moscow, says the Central Asian states will extract maximum advantage from Sino-Russian competition, and Russia has already accepted this. ... Continue reading

China and Kazakhstan Sign US$18 Billion in Orders, Setting a Framework for Industrial-Capacity Cooperation

China and Kazakhstan have finalized US$18 billion in orders. The China-Kazakhstan industrial-capacity cooperation framework, championed by Chinese Premier Li Keqiang, can be said to have found common ground between the two sides at the first opportunity.

From talks large and small with Kazakh Prime Minister Massimov, to jointly attending a China-Kazakhstan entrepreneurs' committee event and a shared dinner, to an in-depth conversation of an hour and a half with President Nazarbayev, Li Keqiang spent the 14th in intensive exchanges with Kazakh leaders, explaining the enormous benefits China-Kazakhstan capacity cooperation would bring to both sides. Under the preliminary plan, the US$18 billion will be raised mainly by Kazakhstan itself or through loans from international institutions, with China providing loans to cover the remaining gap, chiefly to help companies from both countries take part in the program; the cooperation "pie" this sets in motion goes far beyond US$18 billion.

Analysts suggest watching China's infrastructure and engineering companies, including 中成股份 (000151), 北方國際 (000065), 徐工機械 (000425), 中國中冶 (601618), 中國交建 (601800), and 柳工.

China and Kazakhstan Reach a Bilateral Currency-Swap Agreement

The People's Bank of China announced on its website that it has reached a RMB 7 billion bilateral currency-swap agreement with Kazakhstan's central bank, which will advance the internationalization of the renminbi. The central bank's statement follows:

With State Council approval, on December 14, 2014, the People's Bank of China and the National Bank of Kazakhstan renewed their bilateral local-currency swap agreement in Astana and at the same time signed a new bilateral local-currency settlement and payment agreement. The swap is sized at RMB 7 billion / 200 billion Kazakhstani tenge, with a three-year term extendable by mutual consent. With the settlement and payment agreement signed, China-Kazakhstan local-currency settlement expands from border trade to general trade. Economic actors in the two countries may decide for themselves whether to settle and pay for goods and services in freely convertible currencies, renminbi, or tenge.

The Battle to Defend the Ruble Fails: Russia May Face a Sudden Soviet-Style Collapse

The Russian central bank's sharp rate hike failed to halt the ruble's plunge, and some foreign media say Russia may face a sudden Soviet-style collapse. Other outlets note, however, that many Russians do not blame Putin for the country's troubles.

The Washington Post reported that Russia's economy appears to be sliding toward its most turbulent moment in Putin's 15 years in power; as investors' trust in Russia's basic institutions rapidly evaporates, the fundamental question of where Russia is headed now confronts its people. The fear was triggered by the ruble's free fall: despite the central bank's decision late Tuesday to raise rates sharply to stem the currency's decline, the ruble still fell 17% against the dollar in just two days. The report said the foundation of Putin's rule has long been seen as a tacit bargain with the Russian public: Putin guarantees Russia prosperity and stability, and Russians acquiesce in the absence of any genuine opposition party. Now, Putin's side of that bargain appears to be failing.

The report said some of Russia's top economic officials stated Tuesday that Russians will have to adapt to a lower standard of living. Retailers said Tuesday that some Russians have begun snapping up big-ticket items such as cars and appliances, stirring painful memories of the 1998 financial crisis. The Russian government expects inflation to exceed 10% by year's end and the economy to fall into recession next year. Yet the report also said many Russians insisted Tuesday that they do not blame Putin for the country's current troubles. As one shopper put it, everything is going up these days except wages, but we Russians are used to hard times.

The Guardian reported that as the ruble continued its sharp slide, many Russians, watching their savings shrink, vented their frustration with gallows humor, while some began panic-buying. But the report also noted that despite the ruble's record lows, the public mood remains broadly calm, with scarcely any sign of panic. It quoted a pensioner who said she had no savings and so nothing to fear losing to depreciation, though she complained that her monthly pension of 12,000 rubles buys less and less.

The Daily Telegraph reported that with the defense of the ruble lost, Russia may face a sudden Soviet-style collapse. After the central bank's attempt to halt the ruble's crash through a sharp rate rise failed, the report said, Russia has lost control of its economy and may be forced to impose Soviet-style exchange controls. It quoted central bank deputy governor Sergei Shvetsov as saying the situation is critical, a nightmare that would have been unimaginable even a year ago. Anthony Peters of the Swiss investment firm SwissInvest said that while the Russian people are famously stoic in hardship, one should beware the moment their patience runs out. The report quoted Lubomir Mitov, a researcher at the Institute of International Finance, as saying that if Russia's foreign-exchange reserves fall below US$330 billion, then given the country's huge external debt and the convergence of pressures upon it, Russia will be in danger. As of the start of this month, Russia's reserves stood at US$416 billion.

The report said the ruble has depreciated 56% against the dollar over the past year, shrinking Russia's GDP to US$1.1 trillion and pushing its dollar-denominated external debt to at least 70% of GDP, a level rating agencies already consider high-risk. Tim Ash of South Africa's Standard Bank said it is only a matter of time before Russia's sovereign debt is downgraded to junk.

China Exports High-Speed-Rail Technology to Build a China-Europe Land-Sea Express Line

China's rail-technology exports have scored a new success: Premier Li Keqiang met today with the prime ministers of Serbia, Hungary, and Macedonia, who agreed to jointly build a "China-Europe Land-Sea Express Line." In Belgrade, the Serbian capital, Li and the three prime ministers witnessed the signing ceremony for cooperative railway construction. The 400-kilometer Hungary-Serbia railway is reportedly due for completion in mid-2017, and the express line will be extended and upgraded on that foundation. The finished line will run from the Greek port of Piraeus in the south to Budapest in the north, passing through Macedonia and Belgrade; once built, it will open a convenient new route for trade between China and Europe. Meeting Russian Prime Minister Medvedev in Kazakhstan today, Li also discussed his willingness to strengthen high-speed-rail cooperation with Russia.

Meeting Russian Prime Minister Medvedev yesterday morning (the 15th) in the Kazakh capital Astana, mainland State Council Premier Li Keqiang said the mainland is willing to expand the two countries' energy cooperation, strengthen their high-speed-rail cooperation, and properly study the Moscow-Kazan high-speed-rail project.

Xinhua reported Li as saying that the China-Russia comprehensive strategic partnership and cooperation across all fields are currently in good shape, and that he hopes the two countries' high-speed-rail working group will step up its study of issues related to the Moscow-Kazan line and propose an overall cooperation plan as early as possible, so as to exploit the two sides' complementary strengths in Russia's Far East. Li said the mainland is willing to expand investment in the Russian Far East and take part in building its priority development zones, and is also willing to discuss mutually beneficial cooperation with Russia in non-resource sectors, especially infrastructure construction.

Li said the mainland is willing to work with Russia and all relevant parties to keep the Shanghai Cooperation Organisation moving in a direction favorable to regional security and development, and to make a success of next year's Second China-Russia Expo on the mainland and the Yekaterinburg international innovation and industry exhibition in Russia. Following the 19th regular meeting of the Chinese and Russian prime ministers held in Moscow two months ago, the two governments are actively implementing the cooperation consensus reached there.

Russian Prime Minister Medvedev said the two sides' key cooperation projects are advancing steadily, and that Russia is willing to further strengthen cooperation in oil and gas, nuclear power, hydropower, finance, scientific and technological innovation, aircraft manufacturing, and aerospace, as well as in Russia's Far East. Russia is also willing to strengthen cooperation with the mainland within the SCO framework to safeguard regional peace, stability, and development.

Founded in 2001, the Shanghai Cooperation Organisation counts Russia as its most important member besides the four Central Asian states; its 2015 annual summit will be held in Ufa, Russia.

According to the Chinese government's website, Kazakh President Nazarbayev, host of this year's summit, met on the 15th local time with the member states' prime ministers, including Li Keqiang and Medvedev, along with the SCO secretary-general and 張新楓, the mainland Chinese director of the executive committee of the SCO's regional anti-terrorism structure.


Analysis