Showing posts with label 閱讀 (Reading). Show all posts

Friday, July 10, 2015

Taiwan has 268 mountains above 3,000 meters, a record few countries can match; Taiwanese should be full of confidence and learn from the spirit of Switzerland to stand out in the world

I often ask people, "How many mountains over 3,000 meters does Taiwan have?" Most answer somewhere between 1 and 10. When they hear the answer is 268, they are astonished: "Impossible. Nobody ever told me, and I have never seen it on television or in newspapers and magazines."

This is hardly surprising. The Dutch, the Koxinga regime, and the Qing empire never governed Taiwan's vast mountain areas, which cover more than half the island; even Japan did not control all of Taiwan, high mountains included, until 1915.

The Nationalist government that followed Japan did not publicize the vastness of Taiwan's mountains; instead it repeatedly stressed that Taiwan is a "tiny speck of land." Even now, magazines and books describe Taiwan as beautiful, as the home of butterflies, or as a leaf in the wind with an uncertain future.
For all these reasons, few people realize that Taiwan has 268 peaks above 3,000 meters alone, something rare in the world. If we never observe and reflect on such great mountains and forests, and think only of butterflies and falling leaves, we naturally cannot feel how marvelous Taiwan is.

Japan's Mount Fuji, at 3,776 meters, is famous worldwide, yet Japan as a whole has only a dozen or so mountains above 3,000 meters; Britain has its Highlands, but in fact no mountains above 3,000 meters at all.

The Lord of the Rings films were shot entirely in New Zealand, and their snowy alpine scenery captivated audiences worldwide, yet New Zealand's highest peak, Mount Cook, stands 3,744 meters, and the country has only about 20 mountains above 3,000 meters. With such comparisons, everyone can appreciate the greatness of Taiwan's 268!

Taiwan's 268 high mountains are not only tall; in summer they are covered with beautiful flowers, many of them species endemic to Taiwan. Standing on the roof of Taiwan then, one sees endless blue sky, range upon range of mountains, and riots of color. In winter, snow may fall on many of the 268, letting the people of subtropical Taiwan enjoy the snow and taste the high-mountain white for themselves.

Why promote "Taiwan 268"? Because it lets the Taiwanese people see the real Taiwan, a Taiwan both grand and beautiful. Beyond being diligent, clever, and kind, Taiwanese can then face every day full of confidence, like the Taiwan 268 themselves.

Craftsmanship, confidence, and global management with wisdom

Switzerland did it on purpose: craftsmanship built the lowest unemployment rate. In barely a century, Switzerland went from a poor backwater to the model small country with the world's most dazzling economic performance and finest image. Why? It was the result of careful planning and cooperation across the whole nation. To put it bluntly, many of Switzerland's strategies were deliberate!

"Craftsmanship" can be seen in the example of Swiss watchmakers, who painstakingly polish and refine every part, every process, every watch; their devoted attitude toward making products embodies the craftsman's mindset. In the craftsmen's eyes there is only the relentless pursuit of quality, meticulous manufacturing, and the tireless chase of perfection, and nothing else. It is precisely this single-minded craftsmanship that has made Swiss watches world-renowned, best-selling classics.

Craftsmanship is not a Swiss monopoly. Japanese-style management has a trump card: passing down, generation after generation, a love of work pursued with ever-greater refinement. That spirit is exactly the "craftsman spirit."

Taiwan needs to face the whole world, including mainland China, with confidence. Even though mainland China has treated Taiwan badly and has never expressed regret for abandoning it, we still need Taiwan to live out the spirit of Switzerland!

Reference:
https://picasaweb.google.com/103347542292749582834/Taiwan3000m#slideshow/5655966843162429138

Sunday, March 8, 2015

The $62 Billion Secret: The Complete Record of Buffett's Snowball Legend - a book well worth reading for anyone investing and managing money

The $62 Billion Secret: The Complete Record of Buffett's Snowball Legend

The author, Wang Bao-ling (Dr. Jack Wang), is the Chinese-speaking world's best-selling local author of non-fiction. He graduated from National Taiwan University's Economics Department and its Graduate Institute of Economics, then earned an MBA and a PhD in statistics from UCLA. In the book's final pages he writes that Buffett and teacher Cheng Jui-ching gave him the two great investment insights that truly allowed him to reach financial freedom. I believe that in 2015 both the US and Taiwan stock markets will reach historic highs and then await another major correction in late 2015 to 2016. Learn Buffett-style value investing for stock selection, and combine it with a capital-allocation method, fixed-indicator fixed-amount investing, to choose the wisest entry points and profit, rather than wasting your life on short-term wins and losses. That not only adds nothing to your investing wisdom, it may bring crisis to your family!

According to the book, Buffett had accumulated US$174,000 by age 26; through value investing, his wealth grew to US$72.7 billion by age 84, a miracle of roughly 25% annual returns compounded (note: 72.7 billion ≈ 174,000 × (1 + 25%)^(84 - 26)). Many investment gurus claim methods that return more than 80% a year; most of these are basically scams. Methods returning over 50% a year are mostly big-win, big-loss approaches; one big loss may wipe you out beyond recovery, so beware.
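The book's compounding claim can be sanity-checked in a few lines of Python (a minimal sketch using only the figures quoted above):

```python
# Figures from the book: US$174,000 at age 26, compounding at
# roughly 25% per year until age 84.
principal = 174_000          # dollars at age 26
rate = 0.25                  # annual return
years = 84 - 26              # 58 years of compounding

total = principal * (1 + rate) ** years
print(f"${total:,.0f}")      # roughly $72.7 billion
```

The arithmetic works out to about US$72.7 billion, matching the book's figure and showing how a modest-sounding 25% a year becomes a snowball over 58 years.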

The book reminds us: "If you do not start, you cannot succeed. The first step to making money is to begin... Countless people across the country want to make money, yet they do not, because they are always waiting for this or that." Get started! So the book admonishes its readers. (Note: pick 10 companies and their stocks each week, carefully study each firm's value and the integrity of its management, and start learning value investing.)

Buffett learned three main principles from Graham; to follow them, you must be psychologically immune to others' influence:
  • A stock is the right to own a small piece of a company. The stock's value is the corresponding fraction of what you would pay for the whole company.
  • Use a margin of safety. Investing is built on estimates and uncertainty; a wide margin of safety ensures that a correct decision is not undone by unexpected errors. The most important prerequisite for moving forward is not falling back.
  • Mr. Market is your servant, not your master. Graham invented a fickle character called "Mr. Market" who wants to buy and sell stocks with you every day, but the prices he offers are often unreasonable. Mr. Market's moods should not sway your view of a stock's worth, though he does occasionally offer chances to buy low and sell high.
Buffett's secrets
  • Secret 1: Buffett's extraordinary life and road to riches
    • Born in the Great Depression, he made his first deal at six, began buying stocks at twelve, and was a millionaire by thirty-one;
    • Don't rush to make big money; be precise in business, and don't chase small profits without thinking;
    • From Omaha to Wall Street, from one hundred dollars to the world's richest man: the legend of a giant, like a snowball that never stops!
  • Secret 2: Buffett's investment rules and management wisdom
    • A lifelong value investor, he insists on buying in size and holding long term, and does not consider tech or internet stocks;
    • Look for companies whose share price is below their working capital;
    • The margin of safety matters most. A stock may be the right to part of a company, and you can estimate a stock's intrinsic value, but only a margin of safety lets you sleep soundly at night.
    • With the "cigar butt" approach, Buffett kept buying as long as the share price stayed below book value, and took profits when it rose. If the price did not rise, he kept buying until he controlled the company, then sold off its assets for a profit just the same.
    • Buffett later wrote to his partners that buying the "right company" (one with the right prospects, industry conditions, and management), that is, a company's "quality," means the share price will take care of itself.
    • If a company's technology is beyond my understanding and that technology matters to the investment decision, we do not invest in it.
    • If a company is likely to face serious personnel problems, we do not invest, no matter how rich the potential profits.
    • First assess an investment's intrinsic value, reduce risk, buy with a margin of safety, concentrate your holdings, stay within your circle of competence, and let those investments compound without end. Anyone can grasp these simple ideas, yet few put them into practice. Buffett makes the process look effortless, but the technique and discipline behind it demand enormous effort from him and his staff.
    • Investment or speculation? Crisis or opportunity? Buffett teaches you to read the market and find the road to wealth!
    • Buffett once told his children, "It takes a lifetime to build a reputation and five minutes to ruin it." I believe that with the right investment method, a lifetime of wealth grows and accumulates; with the wrong method, that wealth can very likely be lost in a short time, or suffer devastating losses.
  • Secret 3: A supremely practical appendix to use for life
    • Quotable wisdom, a career timeline, and three classic speeches: handle money questions with ease and break through life's bottlenecks!
    • Holdings list, subsidiary list, and letters to shareholders: grasp the wealth-building way of a successful operator!

From Coca-Cola, Gillette, and American Express to Walmart, the companies Buffett favors are all bound up with everyday life. Buffett has always believed that however the world changes, people will still drink Coca-Cola, still shave, and still go shopping, so he is happy to buy such stocks and hold them in quantity. These investments have never let him down: to date, Coca-Cola stock has earned Buffett more than US$10 billion, American Express has brought in US$7 billion, and the Gillette stake has returned US$3.7 billion on the books. As for Walmart, the share price has climbed ever since Buffett bought in, and its high growth and profits have likewise endeared it to the "god of stocks," so Berkshire has kept adding to its Walmart position. Walmart is now Berkshire's sixth-largest cash cow, once again proving Buffett's singular eye.

Investing in stocks must offer higher returns; otherwise, where would the hundred-million-dollar fortunes of those earning tens of millions a year come from? But with such high returns necessarily comes risk. I consider investment and speculation two sides of the same coin; the questions are how to cut their risks to a minimum and how to create the greatest profit at the best opportunity. Investment earns its return over a longer period, while speculation seeks the greatest profit in a very short time; stocks belong to investment. Derivatives such as futures, options, and warrants are more speculative, but both paths ultimately chase high returns. In short, I believe that in 2015 the US and Taiwan markets will reach historic highs and then await another major correction in late 2015 to 2016. Learning Buffett-style value investing for stock selection, combined with a capital-allocation method, fixed-indicator fixed-amount investing, to pick the wisest entry points is in fact a rare learning opportunity. By my own timetable, through 2014-2015 I am gradually selling long-held stocks and shifting slowly into fixed-dividend US-dollar insurance assets, while in 2014-2016 part of my capital goes into short-term renminbi notes and hedge funds, waiting for a panic sell-off, when the fixed-indicator fixed-amount entry points of value investing appear most readily. (Reference: Warren Buffett on Wikipedia, and "The Stock Market Patriarch's Legal Speculation Secrets" by Cheng Jui-ching.)


Wednesday, February 25, 2015

The Magic of Hot Compresses: Cure Aches with One Towel! A "High Body Temperature" Makes You 10 Years Younger! - a very practical book

The Magic of Hot Compresses: Cure Aches with One Towel! A "High Body Temperature" Makes You 10 Years Younger!

"Low body temperature" makes you sick all the time! Unexplained aches and discomfort are all tied to body temperature

  Many people who are neither ill nor injured nonetheless suffer long-term unexplained pain and dizziness: badly stiff shoulders, a body that always feels weak... A drop in body temperature is the cause of these complaints. This book is the latest work by Yoshida Motofumi, author of "Cure Back Pain with One Movement." An advocate of natural therapy without injections or drugs, he now uses "a single towel" to build bodies that resist illness! Simply warming one part of the body improves blood flow there and raises self-healing power, banishing chronic pain and building a body that resists aging!

Medically proven: when body temperature drops one degree, immunity falls thirty percent

  Body temperature can be called the health indicator of a person's condition. Patients with discomfort or pain invariably have low body temperature, or a normal temperature lower than it used to be. The cells that keep the body working are most active at around thirty-seven degrees. Medicine has confirmed that a one-degree drop in body temperature lowers immunity by thirty percent. The body's innate "natural healing power" declines with it: you catch colds more easily, develop allergy symptoms, and feel unwell without ever quite recovering.

A body temperature below 36 degrees means a health crisis! A quick self-check for the "low-temperature tribe"

  When body temperature drops, blood circulation worsens, immunity falls, and illness comes easily! Normal body temperature is 36.5-37.1 degrees; "low body temperature" means below 36 degrees. It most often afflicts people with little muscle from lack of exercise, the slight of build, senior citizens, dieters, and those with unbalanced diets or heavy stress.

Are you in the low-temperature tribe? The home "magic hot-compress method" fixes the body's minor ailments

Body temperature can be called the health indicator of a person's condition. Medicine has confirmed that a one-degree drop in body temperature lowers immunity by 30 percent. People with "low body temperature" also see their innate "natural healing power" decline: they catch colds easily, develop allergy symptoms, and feel chronically unwell without recovering. You may wonder, "I don't feel cold, so why do my aches never stop?" In fact a "cold-sensitive constitution" and "low body temperature" are different things. A cold-sensitive constitution means part of the body surface gets cold, such as icy hands and feet, and you feel it clearly; low body temperature means the core is cooler, with no sensation of cold. Both harm your health, but by comparison, low body temperature does the greater damage! Normal body temperature is 36.5-37.1 degrees, while "low body temperature" means below 36 degrees; people with little muscle from lack of exercise, the slight of build, seniors, dieters, and those with unbalanced diets or heavy stress are the ones most troubled by it.

Yoshida Motofumi, the author of "The Magic of Hot Compresses," observes that many people who are neither ill nor injured suffer long-term unexplained pain and dizziness, badly stiff shoulders, and constant weakness; the fallen body temperature is the cause. Just warming one part of the body with a hot towel improves blood flow there, raises self-healing power, banishes chronic pain, and builds a body that resists aging; a "high body temperature" can even make you ten years younger! In the book, Yoshida explains where to apply compresses for the 27 most common complaints, for example:

Tinnitus and dizziness: caused by autonomic nervous disorder, accompanied by shoulder pain and headache
Suggested compress sites: from the nape to behind the ears, and both shoulder blades.

Sciatica: a stubborn, common complaint that most troubles office workers and pregnant women
Suggested compress sites: the pubic area and the buttock muscles; soaking the elbow in hot water and palm-reflex pressure can serve as first aid.

Enlarged prostate: residual-urine sensation, lower-abdomen discomfort, urination trouble in men
Apply the compress to the crotch: place the towel on the chair seat and sit on it, letting body weight press it close.

In cold weather, though, a hot towel never stays warm long; reheating it every three to five minutes is a nuisance. Yoshida teaches three steps to make your own "long-lasting compress towel," which holds its heat for up to 30 minutes!
1. Soak the towel completely in freshly boiled water.
2. Put on insulated gloves and wring the towel only lightly; wringing it too dry makes it cool faster.
3. Put it in a plastic bag, wrap it in a dry towel, and begin the compress.

Yoshida also advises concentrating on "one painful spot at a time" for the best effect. To relieve two or three symptoms, start from the compress points on the lower body. When the point is on the back, lying face-up keeps the towel hot longer and conducts heat better; for the back of the head and neck, use a cushion to steady the head so the hot towel hugs the curve of the head and neck. There is no need to undress; compress through your clothes, in whatever posture feels relaxed, sitting or lying down. But remove the towel as soon as it feels even slightly cool, or the pain-relieving effect will drop.

People with little stamina are the best candidates for easing discomfort with hot compresses

There are many methods on the market for raising body temperature; some raise it for the whole body, others target one area. Either way, warming the body certainly helps raise its temperature.

But understand that whole-body warming and local warming work differently. Bathing is the simplest way to warm the whole body: a warm bath sends blood to the skin, improves circulation, and boosts heat production, which in turn raises immunity. The water pressure even improves liver function, though only briefly. Note, however, that once you leave the bath and your temperature falls, these effects disappear with it.

In my own experience, patients with discomfort or pain all show low body temperature, or a normal temperature lower than before. A patient with only mild discomfort or pain who exercises to build muscle may burden the body further; worse, even bathing can overtax their stamina. Warming the whole body takes time and strength, and it is no simple matter.

Seen in that light, warming one part of the body better suits people without stamina, as well as bedridden elderly people. When a part of the body is warmed, its blood vessels dilate, blood volume rises, and more oxygen and nutrients gather there, so discomfort and pain ease quickly.

This principle tells us that what eases discomfort and pain quickly is not the "outside" force of drugs and surgery but the "natural healing power" the body is born with. Natural healing power has no side effects and costs nothing at all.

To raise body temperature in daily life, the ideal is to combine whole-body and local warming. Then again, even a healthy person finds the thought of this or that daily health regimen a chore, let alone chronic-pain sufferers, the frail, and the elderly. Just build the habit of warming one part of the body: it relieves discomfort and pain, builds heat-producing capacity, and creates a body that resists aging.

How to make the "long-lasting compress towel"

Make the long-lasting compress towel on the spot, just before the treatment.
It requires freshly boiled water, so wear insulated gloves throughout and work carefully to avoid scalding.

Step 1: Soak the towel in hot water

Pour freshly boiled water (90-96°C) into a large basin, add as many towels as you need, and submerge them completely so they soak up the hot water.

Step 2: Wring the towel lightly

  1. With insulated gloves on both hands, slowly lift out the towel and wring it. If you wring out all the water, the heat fades quickly, so wring just enough that it no longer drips.
  2. Wringing too dry speeds up cooling. The towel must keep some moisture: just enough that no hot water drips out once it is in the plastic bag.

Step 3: Put the towel in a plastic bag, folded to size

  1. Fold the wrung towel in two or three and place it in a plastic bag.
  2. Squeeze as much air as you can out of the bag and fold the bag's opening inward about 2 cm.
  3. Fold the bag to match the size of the "compress point."
Step 4: Wrap in a dry towel
Wrap the whole plastic bag in a dry towel. If it feels too hot even through the insulated gloves, set it aside until the heat subsides enough not to scald you before wrapping.

Making a "long-lasting compress towel" in the microwave

  1. Wet the towel with water and wring it until it no longer drips.
  2. Wrap it loosely in plastic wrap and microwave it for 1 minute.
  3. The remaining steps are the same as when making the towel with hot water.

Adjust the heating time to your microwave's wattage.
A microwaved hot towel cools in about 2 minutes. For the best effect, compress each point for 2 minutes, then switch to a fresh hot towel, for a total of 10 minutes per point.

Wednesday, January 7, 2015

From machine learning to a revolution in the structure of human society - giant cloud robots will change human society within 20 years

It used to be that if you wanted to get a computer to do something new, you would have to program it. Now, programming, for those of you here that haven't done it yourself, requires laying out in excruciating detail every single step that you want the computer to do in order to achieve your goal. Now, if you want to do something that you don't know how to do yourself, then this is going to be a great challenge.

So this was the challenge faced by this man, Arthur Samuel. In 1956, he wanted to get this computer to be able to beat him at checkers. How can you write a program, lay out in excruciating detail, how to be better than you at checkers? So he came up with an idea: he had the computer play against itself thousands of times and learn how to play checkers. And indeed it worked, and in fact, by 1962, this computer had beaten the Connecticut state champion.

So Arthur Samuel was the father of machine learning, and I have a great debt to him, because I am a machine learning practitioner. I was the president of Kaggle, a community of over 200,000 machine learning practitioners. Kaggle puts up competitions to try and get them to solve previously unsolved problems, and it's been successful hundreds of times. So from this vantage point, I was able to find out a lot about what machine learning can do in the past, can do today, and what it could do in the future. Perhaps the first big success of machine learning commercially was Google. Google showed that it is possible to find information by using a computer algorithm, and this algorithm is based on machine learning. Since that time, there have been many commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest products that you might like to buy, movies that you might like to watch. Sometimes, it's almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about who your friends might be and you have no idea how it did it, and this is because it's using the power of machine learning. These are algorithms that have learned how to do this from data rather than being programmed by hand.

This is also how IBM was successful in getting Watson to beat the two world champions at "Jeopardy," answering incredibly subtle and complex questions like this one. ["The ancient 'Lion of Nimrud' went missing from this city's national museum in 2003 (along with a lot of other stuff)"] This is also why we are now able to see the first self-driving cars. If you want to be able to tell the difference between, say, a tree and a pedestrian, well, that's pretty important. We don't know how to write those programs by hand, but with machine learning, this is now possible. And in fact, this car has driven over a million miles without any accidents on regular roads.

So we now know that computers can learn, and computers can learn to do things that we actually sometimes don't know how to do ourselves, or maybe can do them better than us. One of the most amazing examples I've seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy called Geoffrey Hinton from the University of Toronto won a competition for automatic drug discovery. Now, what was extraordinary here is not just that they beat all of the algorithms developed by Merck or the international academic community, but nobody on the team had any background in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm called deep learning. So important was this that in fact the success was covered in The New York Times in a front page article a few weeks later. This is Geoffrey Hinton here on the left-hand side. Deep learning is an algorithm inspired by how the human brain works, and as a result it's an algorithm which has no theoretical limitations on what it can do. The more data you give it and the more computation time you give it, the better it gets.

The New York Times also showed in this article another extraordinary result of deep learning which I'm going to show you now. It shows that computers can listen and understand.

(Video) Richard Rashid: Now, the last step that I want to be able to take in this process is to actually speak to you in Chinese. Now the key thing there is, we've been able to take a large amount of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text and converts it into Chinese language, and then we've taken an hour or so of my own voice and we've used that to modulate the standard text-to-speech system so that it would sound like me. Again, the result's not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There's much work to be done in this area. (In Chinese)

Jeremy Howard: Well, that was at a machine learning conference in China. It's not often, actually, at academic conferences that you do hear spontaneous applause, although of course sometimes at TEDx conferences, feel free. Everything you saw there was happening with deep learning. (Applause) Thank you. The transcription in English was deep learning. The translation to Chinese and the text in the top right, deep learning, and the construction of the voice was deep learning as well.

So deep learning is this extraordinary thing. It's a single algorithm that can seem to do almost anything, and I discovered that a year earlier, it had also learned to see. In this obscure competition from Germany called the German Traffic Sign Recognition Benchmark, deep learning had learned to recognize traffic signs like this one. Not only could it recognize the traffic signs better than any other algorithm, the leaderboard actually showed it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see better than people. Since that time, a lot has happened. In 2012, Google announced that they had a deep learning algorithm watch YouTube videos and crunched the data on 16,000 computers for a month, and the computer independently learned about concepts such as people and cats just by watching the videos. This is much like the way that humans learn. Humans don't learn by being told what they see, but by learning for themselves what these things are. Also in 2012, Geoffrey Hinton, who we saw earlier, won the very popular ImageNet competition, looking to try to figure out from one and a half million images what they're pictures of. As of 2014, we're now down to a six percent error rate in image recognition. This is better than people, again.

So machines really are doing an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers. Imagine how long it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded to Baidu's deep learning system, and underneath you can see that the system has understood what that picture is and found similar images. The similar images actually have similar backgrounds, similar directions of the faces, even some with their tongue out. This is not clearly looking at the text of a web page. All I uploaded was an image. So we now have computers which really understand what they see and can therefore search databases of hundreds of millions of images in real time.

So what does it mean now that computers can see? Well, it's not just that computers can see. In fact, deep learning has done more than that. Complex, nuanced sentences like this one are now understandable with deep learning algorithms. As you can see here, this Stanford-based system showing the red dot at the top has figured out that this sentence is expressing negative sentiment. Deep learning now in fact is near human performance at understanding what sentences are about and what it is saying about those things. Also, deep learning has been used to read Chinese, again at about native Chinese speaker level. This algorithm was developed in Switzerland by people, none of whom speak or understand any Chinese. As I say, using deep learning is about the best system in the world for this, even compared to native human understanding.

This is a system that we put together at my company which shows putting all this stuff together. These are pictures which have no text attached, and as I'm typing in here sentences, in real time it's understanding these pictures and figuring out what they're about and finding pictures that are similar to the text that I'm writing. So you can see, it's actually understanding my sentences and actually understanding these pictures. I know that you've seen something like this on Google, where you can type in things and it will show you pictures, but actually what it's doing is it's searching the webpage for the text. This is very different from actually understanding the images. This is something that computers have only been able to do for the first time in the last few months.

So we can see now that computers can not only see but they can also read, and, of course, we've shown that they can understand what they hear. Perhaps not surprising now that I'm going to tell you they can write. Here is some text that I generated using a deep learning algorithm yesterday. And here is some text that an algorithm out of Stanford generated. Each of these sentences was generated by a deep learning algorithm to describe each of those pictures. This algorithm before has never seen a man in a black shirt playing a guitar. It's seen a man before, it's seen black before, it's seen a guitar before, but it has independently generated this novel description of this picture. We're still not quite at human performance here, but we're close. In tests, humans prefer the computer-generated caption one out of four times. Now this system is now only two weeks old, so probably within the next year, the computer algorithm will be well past human performance at the rate things are going. So computers can also write.

So we put all this together and it leads to very exciting opportunities. For example, in medicine, a team in Boston announced that they had discovered dozens of new clinically relevant features of tumors which help doctors make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that, looking at tissues under magnification, they've developed a machine learning-based system which in fact is better than human pathologists at predicting survival rates for cancer sufferers. In both of these cases, not only were the predictions more accurate, but they generated new insightful science. In the radiology case, they were new clinical indicators that humans can understand. In this pathology case, the computer system actually discovered that the cells around the cancer are as important as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists had been taught for decades. In each of those two cases, they were systems developed by a combination of medical experts and machine learning experts, but as of last year, we're now beyond that too. This is an example of identifying cancerous areas of human tissue under a microscope. The system being shown here can identify those areas more accurately, or about as accurately, as human pathologists, but was built entirely with deep learning using no medical expertise by people who have no background in the field. Similarly, here, this neuron segmentation. We can now segment neurons about as accurately as humans can, but this system was developed with deep learning using people with no previous background in medicine.

So myself, as somebody with no previous background in medicine, I seem to be entirely well qualified to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest that it ought to be possible to do very useful medicine using just these data analytic techniques. And thankfully, the feedback has been fantastic, not just from the media but from the medical community, who have been very supportive. The theory is that we can take the middle part of the medical process and turn that into data analysis as much as possible, leaving doctors to do what they're best at. I want to give you an example. It now takes us about 15 minutes to generate a new medical diagnostic test and I'll show you that in real time now, but I've compressed it down to three minutes by cutting some pieces out. Rather than showing you creating a medical diagnostic test, I'm going to show you a diagnostic test of car images, because that's something we can all understand.

So here we're starting with about 1.5 million car images, and I want to create something that can split them into the angle of the photo that's being taken. So these images are entirely unlabeled, so I have to start from scratch. With our deep learning algorithm, it can automatically identify areas of structure in these images. So the nice thing is that the human and the computer can now work together. So the human, as you can see here, is telling the computer about areas of interest which it wants the computer then to try and use to improve its algorithm. Now, these deep learning systems actually are in 16,000-dimensional space, so you can see here the computer rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then point out the areas that are interesting. So here, the computer has successfully found areas, for example, angles. So as we go through this process, we're gradually telling the computer more and more about the kinds of structures we're looking for. You can imagine in a diagnostic test this would be a pathologist identifying areas of pathosis, for example, or a radiologist indicating potentially troublesome nodules. And sometimes it can be difficult for the algorithm. In this case, it got kind of confused. The fronts and the backs of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts as opposed to the backs, then telling the computer that this is a type of group that we're interested in.

So we do that for a while, we skip over a little bit, and then we train the machine learning algorithm based on these couple of hundred things, and we hope that it's gotten a lot better. You can see, it's now started to fade some of these pictures out, showing us that it already is recognizing how to understand some of these itself. We can then use this concept of similar images, and using similar images, you can now see, the computer at this point is able to entirely find just the fronts of cars. So at this point, the human can tell the computer, okay, yes, you've done a good job of that.

Sometimes, of course, even at this point it's still difficult to separate out groups. In this case, even after we let the computer try to rotate this for a while, we still find that the left sides and the right sides pictures are all mixed up together. So we can again give the computer some hints, and we say, okay, try and find a projection that separates out the left sides and the right sides as much as possible using this deep learning algorithm. And giving it that hint -- ah, okay, it's been successful. It's managed to find a way of thinking about these objects that's separated out these together.

So you get the idea here. This is a case not where the human is being replaced by a computer, but where they're working together. What we're doing here is we're replacing something that used to take a team of five or six people about seven years and replacing it with something that takes 15 minutes for one person acting alone.

So this process takes about four or five iterations. You can see we now have 62 percent of our 1.5 million images classified correctly. And at this point, we can start to quite quickly grab whole big sections, check through them to make sure that there's no mistakes. Where there are mistakes, we can let the computer know about them. And using this kind of process for each of the different groups, we are now up to an 80 percent success rate in classifying the 1.5 million images. And at this point, it's just a case of finding the small number that aren't classified correctly, and trying to understand why. And using that approach, by 15 minutes we get to 97 percent classification rates.
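The loop described above, where the model surrenders its least-confident items for a human to label and is then retrained, is essentially human-in-the-loop active learning. Here is a toy sketch of the idea in Python (not the presenter's actual system: a nearest-centroid classifier on synthetic 2-D points stands in for the deep network, and the "human" is simulated by the known labels):

```python
import random

random.seed(0)

# Simulated pool of 2,000 unlabeled items, each a 2-D feature vector.
# True class 0 clusters near (0, 0); class 1 clusters near (4, 4).
def make_point():
    label = random.randint(0, 1)
    feat = (random.gauss(4 * label, 1.5), random.gauss(4 * label, 1.5))
    return feat, label

pool = [make_point() for _ in range(2000)]

def centroid(feats):
    n = len(feats)
    return (sum(f[0] for f in feats) / n, sum(f[1] for f in feats) / n)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# The human starts by labeling five examples of each class.
labeled = [p for p in pool if p[1] == 0][:5] + [p for p in pool if p[1] == 1][:5]
unlabeled = [p for p in pool if p not in labeled]

acc = 0.0
for round_no in range(5):
    # "Train": one centroid per class from the human-labeled examples.
    cents = {c: centroid([f for f, l in labeled if l == c]) for c in (0, 1)}

    # The most ambiguous items sit nearly equidistant from both centroids;
    # sort them to the front and hand them to the "human" for labels.
    unlabeled.sort(key=lambda p: abs(dist2(p[0], cents[0]) - dist2(p[0], cents[1])))
    labeled += unlabeled[:50]
    unlabeled = unlabeled[50:]

    # Accuracy of the current model on everything still unlabeled.
    hits = sum(
        (dist2(f, cents[1]) < dist2(f, cents[0])) == (l == 1)
        for f, l in unlabeled
    )
    acc = hits / len(unlabeled)
    print(f"round {round_no}: {len(labeled)} labeled, {acc:.1%} correct")
```

Each round the model asks the human only about the items it finds hardest, which is why a few hundred labels can organize a pool of 1.5 million images in the demo above.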
So this kind of technique could allow us to fix a major problem, which is that there's a lack of medical expertise in the world. The World Economic Forum says that there's between a 10x and a 20x shortage of physicians in the developing world, and it would take about 300 years to train enough people to fix that problem. So imagine if we can help enhance their efficiency using these deep learning approaches?

So I'm very excited about the opportunities. I'm also concerned about the problems. The problem here is that every area in blue on this map is somewhere where services are over 80 percent of employment. What are services? These are services. These are also the exact things that computers have just learned how to do. So 80 percent of the world's employment in the developed world is stuff that computers have just learned how to do. What does that mean? Well, it'll be fine. They'll be replaced by other jobs. For example, there will be more jobs for data scientists. Well, not really. It doesn't take data scientists very long to build these things. For example, these four algorithms were all built by the same guy. So if you think, oh, it's all happened before, we've seen the results in the past of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It's very hard for us to estimate this, because human performance grows at this gradual rate, but we now have a system, deep learning, that we know actually grows in capability exponentially. And we're here. So currently, we see the things around us and we say, "Oh, computers are still pretty dumb." Right? But in five years' time, computers will be off this chart. So we need to be starting to think about this capability right now.

We have seen this once before, of course. In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all the situations, things really settled down. The Machine Learning Revolution is going to be very different from the Industrial Revolution, because the Machine Learning Revolution, it never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before, so your previous understanding of what's possible is different.

This is already impacting us. In the last 25 years, as capital productivity has increased, labor productivity has been flat, in fact even a little bit down.

So I want us to start having this discussion now. I know that when I often tell people about this situation, people can be quite dismissive. Well, computers can't really think, they don't emote, they don't understand poetry, we don't really understand how they work. So what? Computers right now can do the things that humans spend most of their time being paid to do, so now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality. Thank you!