
Tuesday, February 17, 2015

The Future of Super-Human Biology: Biology, Nanotechnology, and Artificial Intelligence Combined into the Super-Human, Ray Kurzweil

    Let me begin with a story. It goes back 200 million years, and it is a story about the neocortex. The early mammals (only mammals have a neocortex), such as rodents, had a neocortex about the size and thickness of a postage stamp, a thin covering wrapped around their walnut-sized brains. That thin layer was no small thing: it gave these animals a new kind of thinking. Non-mammalian animals behaved in essentially fixed ways, but mammals with a neocortex could invent new behaviors. A mouse escaping a predator, finding its usual path blocked, will try to work out a new route. It may get away, or it may end up in the cat's mouth, but if it succeeds, it remembers what worked, and a new behavior takes shape. Remarkably, that newly learned behavior can spread quickly through the whole mouse population. We can imagine another mouse watching and saying, "Wow, how clever, going around the rock like that!" and then picking up the new skill almost immediately.

Non-mammalian animals could do nothing of the kind; their behavior was fixed. More precisely, they could acquire new behaviors, but not within a single lifetime: it might take a thousand generations for a new, fixed behavior to spread through the whole species. Two hundred million years ago, in that primeval world, such a pace of change was good enough. The environment changed slowly; a dramatic upheaval came along perhaps once every ten thousand years, and over a span like that, a species could evolve a new behavior to match.

And so things went well, until disaster struck. Fast forward to 65 million years ago: the Earth suffered a sudden environmental upheaval, now known as the Cretaceous extinction event. The dinosaurs were wiped out; 75 percent of the species on Earth went extinct; and the mammals moved in to occupy the niches left behind. If we put words in the mammals' mouths, they might have said, "Well, that neocortex really came in handy." The neocortex kept developing. Mammals grew larger, their brain capacity expanded rapidly, and the neocortex grew fastest of all, developing its distinctive ridges and folds, which further increase its surface area. The human neocortex, spread out flat, is about the size of a table napkin, and it is still just as thin, roughly the thickness of a napkin. Its convoluted, deeply folded shape means it now makes up about 80 percent of the brain's volume. It not only does the heavy lifting of thinking, it also restrains and refines our behavior. Today the older parts of our brain still generate primitive drives and motivations, but the neocortex quietly sublimates the wilder urge to conquer into civilized acts: writing poetry, building apps, even giving TED talks. For all of this, we have the neocortex to thank.

Fifty years ago I wrote a paper on how the brain works, arguing that the brain is an organized collection of modules. Each module recognizes a particular pattern, but it can also learn and remember new patterns and put them to use. These patterns are organized in a hierarchy, and we create that hierarchy with our own thinking. Fifty years ago, with the limited tools of the time, progress was slow, but the work earned me a meeting with President Johnson. I have spent the fifty years since working on this subject, and a year and a half ago I published a new book, "How to Create a Mind," which takes up the same question, except that this time I am fortunate to have plenty of evidence. The amount of data neuroscience gives us about the brain is doubling every year, and the spatial resolution of brain scanning is also doubling every year. We can now look inside a living brain, see individual interneuronal connections, and watch them connect and fire in real time. We can watch the brain create our thoughts, and watch our thoughts, in turn, strengthen and shape the brain; thinking itself is central to how the brain develops.

Let me briefly describe how the brain works. I have actually estimated the number of these modules: we have about 300 million of them, organized into hierarchies. Take a simple example. Suppose I have a group of modules that recognize the crossbar in a capital "A." That is all they care about. Beautiful music may be playing, a pretty girl may walk by; they take no notice. But the moment they see the crossbar of an "A," they get excited, shout "crossbar!", and signal up their axons that the feature has been recognized. Higher-level modules, conceptual ones, then take over in turn. The higher the level, the more abstract the thinking. A lower level might recognize the letter "A"; several levels up, a module recognizes the word "APPLE." Information also flows back down. When the "APPLE" module has seen A-P-P-L, it thinks, "Hmm, the next letter is probably an E," and sends a signal down to the modules that recognize "E": "Heads up, an E is likely coming." The "E" recognizers then lower their thresholds, so that anything that looks even vaguely like an E is accepted as one. That is not how they normally behave, but because we are expecting an E and the shape is close enough, we decide it is an E. Once the "E" is recognized, "APPLE" is recognized.
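To make the idea concrete, here is a minimal Python sketch, not Kurzweil's actual model, of hierarchical recognizers with top-down priming: low-level "letter" modules fire when their evidence clears a threshold, and a higher-level "word" module primes the expected next letter by lowering that letter's threshold. All class names and numbers are illustrative assumptions.

```python
DEFAULT_THRESHOLD = 0.8   # evidence a letter normally needs
PRIMED_THRESHOLD = 0.5    # lowered threshold for a letter we expect next

class LetterModule:
    def __init__(self, letter):
        self.letter = letter
        self.threshold = DEFAULT_THRESHOLD

    def recognize(self, evidence):
        """Fire if the noisy evidence for this letter clears the threshold."""
        return evidence >= self.threshold

class WordModule:
    def __init__(self, word, letter_modules):
        self.word = word
        self.letters = letter_modules
        self.seen = ""

    def feed(self, letter, evidence):
        module = self.letters[letter]
        if module.recognize(evidence):
            self.seen += letter
            # Top-down prediction: prime the recognizer for the expected next letter.
            if len(self.seen) < len(self.word):
                nxt = self.word[len(self.seen)]
                self.letters[nxt].threshold = PRIMED_THRESHOLD
        return self.seen == self.word

letters = {c: LetterModule(c) for c in "APLE"}
apple = WordModule("APPLE", letters)
# The final 'E' arrives with weak evidence (0.6); it is accepted only because
# the word-level module primed it after seeing A-P-P-L.
for c, ev in [("A", 0.9), ("P", 0.9), ("P", 0.85), ("L", 0.9), ("E", 0.6)]:
    done = apple.feed(c, ev)
print("Recognized APPLE:", done)  # -> True
```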

If we go up another five levels, we are fairly high in the hierarchy. At that level we have all kinds of perceptual modules: some recognize the texture of a particular fabric, the timbre of a particular voice, even the scent of a particular perfume, and then tell me: my wife has just come into the room!

Go up another ten levels and we reach a very high level, probably in the frontal cortex. Here the modules can pass judgment on people and situations: "That was funny." "She's gorgeous."

You might think this whole process sounds complicated. What is actually more puzzling is the hierarchy itself. There was a sixteen-year-old girl undergoing brain surgery, kept awake because the surgeons needed to talk to her during the procedure. Staying conscious poses no problem for the operation, since there are no pain receptors inside the brain. To everyone's surprise, whenever the doctors stimulated certain tiny spots on her neocortex, the red areas in the picture, the girl would burst out laughing. At first everyone assumed they were simply triggering a laugh reflex, but they soon realized this was not the case: these particular regions of her neocortex perceive humor, and whenever they were stimulated she found everything funny. "You guys are just so funny, standing around like that," was her typical explanation, although we know the scene was not funny at all; everyone was in the middle of a tense operation.

So where are we today? Computers are getting smarter. Using techniques that function much like the neocortex, they can learn and master human language. I have described an algorithm similar to hierarchical hidden Markov models (a statistical model used for natural language processing), which I have been working on since the 1990s. "Jeopardy" is a natural-language quiz show, and IBM's Watson computer won it with a score higher than the two best human players combined. It even handled this clue with ease: "A long, tiresome speech delivered by a frothy pie topping." It quickly answered: "What is a meringue harangue?" Ken Jennings and the other contestant were stumped. This is a hard, subtle question, and it shows that computers are beginning to master human language. Watson, in fact, built up its language ability by reading Wikipedia and other encyclopedias.
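For readers who have not met hidden Markov models, here is a minimal sketch, illustrative only and not Kurzweil's hierarchical HMM or Watson's pipeline: it runs the Viterbi algorithm on a toy tagging problem to recover the most likely hidden state sequence. All states, words, and probabilities are made up for the example.

```python
states = ["Noun", "Verb"]
start_p = {"Noun": 0.6, "Verb": 0.4}
trans_p = {"Noun": {"Noun": 0.3, "Verb": 0.7},
           "Verb": {"Noun": 0.8, "Verb": 0.2}}
emit_p = {"Noun": {"dogs": 0.5, "cats": 0.4, "chase": 0.1},
          "Verb": {"dogs": 0.1, "cats": 0.1, "chase": 0.8}}

def viterbi(words):
    # V[t][s] = (best probability of reaching state s at position t, backpointer)
    V = [{s: (start_p[s] * emit_p[s].get(words[0], 1e-6), None) for s in states}]
    for t in range(1, len(words)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(words[t], 1e-6), p)
                for p in states)
            V[t][s] = (prob, prev)
    # Trace back from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(words) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

print(viterbi(["dogs", "chase", "cats"]))  # -> ['Noun', 'Verb', 'Noun']
```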

Five to ten years from now, our search engines will no longer just match words and links. They will try to understand the information itself, reading through the vast expanse of the web and of books, extracting and distilling knowledge. Imagine that you are out for a walk when the Google assistant on your device suddenly says: "Mary, you mentioned last month that the glutathione supplement you're taking isn't working because it can't cross the blood-brain barrier. Good news! Thirteen seconds ago, a new study showed a different route for getting glutathione in. Let me summarize the report for you."

Twenty years from now we will have nanobots. Technology keeps getting smaller, and the trend is accelerating. These devices will travel through our capillaries into our brains and connect our own neocortex to a synthetic neocortex in the cloud, an extension of the one we already have. Today every smartphone carries a computer inside it, and if we need ten thousand computers for a few seconds to run a complex search, we can get that power from the cloud. By 2030, when you need a more powerful neocortex, you will connect straight from your brain to the cloud to get it. Say I am out walking and I see someone in the distance: "My goodness, that's Chris Anderson (the TED host), and he's walking toward me. This is my chance to impress him. But I've only got three seconds, and the 300 million modules in my neocortex clearly aren't enough. I need another billion!" So I connect to the cloud, and my thinking becomes a hybrid of biological and non-biological thinking, with the strengths of both. The non-biological part is subject to the law of accelerating returns: the returns from technology grow exponentially, not linearly. Remember what happened the last time the neocortex expanded dramatically? That was two million years ago, when we were still early hominids and began to grow these large foreheads. Other primates have sloping foreheads because they lack a frontal cortex. But the frontal cortex did not bring anything qualitatively different; it was a quantitative increase in neocortex, additional capacity for thinking, that ultimately produced a qualitative leap. It allowed us to invent language, create art, develop technology, and give TED talks, feats no other species has managed.

I believe that over the coming decades we will repeat that feat. We will use technology to expand the neocortex once again, only this time we will not be limited by the size of the skull, which means the expansion has no fixed limit. That quantitative increase will once again trigger a qualitative leap, in culture and in technology alike.

Thank you!



The combination of biology, nanotechnology, and artificial intelligence into the super-human is Ray Kurzweil's prediction.

According to a report by Sina Technology, Ray Kurzweil, with his light blond hair and full forehead, has the composure of an elder statesman: he answers questions unhurriedly, combining ideas and examples to explain baffling technical questions. He carries five titles, American author, computer scientist, inventor, futurist, and Director of Engineering at Google, along with some twenty honorary doctorates and a top honor awarded by the President of the United States.
Of these five identities, his favorite is being a technical director at Google, because the role brings together his many other roles and lets him carry technology to the whole world. "Google has billions of users, so the work we do can have a direct impact on the world," he says.

For such a legendary figure, whom the Wall Street Journal has called a "restless genius" and Forbes "the ultimate thinking machine," what, beyond natural talent, is the secret to his creativity and his ability to think far ahead of his time?

What he most wants to do is change the world through technology

Ray is best known as a futurist; his predictions about technology span decades into the future. Some of them sound far-fetched, even surreal, but precisely because technology is growing exponentially, the future really is hard to imagine.

As an article on his own technology site, Kurzweil Accelerating Intelligence, puts it, some technology predictions come true not only because technology grows exponentially, but also because once a prediction is made public, people start paying attention, money and talent flow in, and that investment eventually drives the breakthrough in the field. As a futurist, what Ray most wants to do is improve the world through technology. He also writes books to spread the appeal of technology: "The Age of Spiritual Machines" and "The Singularity Is Near" both reached the top of Amazon's science and technology sales rankings.

His ability to make bold, far-reaching, and valuable predictions comes from natural talent and constant thinking, and from decades of immersion in the technology world. Still, people want to know the secret behind his inexhaustible drive to keep thinking and keep innovating.

His trick is a kind of reverse thinking: "I imagine the talk I would give five years from now about my new invention and how I would explain it. The invention doesn't exist yet, but I think about what I would say and what problem it solves. Then I work backwards: if that is the future talk, what do I have to do now, and in what steps? That is how I invent."
The advice he wants to share with young people is to measure information technologies by their price-performance ratio, studying a technology's competitiveness in terms of its cost. Metrics grounded in present reality like this, he points out, are highly predictable; they let you forecast three to five years out, for example in communications and biotechnology.
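As a small illustration of this kind of price-performance forecasting: if a metric improves by a roughly constant factor per year, extrapolating it a few years out is simple arithmetic. The sketch below uses invented placeholder numbers, not Kurzweil's data.

```python
def extrapolate(current_value, annual_growth_factor, years):
    """Project an exponentially improving price-performance metric forward."""
    return current_value * (annual_growth_factor ** years)

# e.g. operations per second per dollar, assumed to double roughly every year
today = 1.0e9            # hypothetical ops/sec per dollar today
growth = 2.0             # assumed annual doubling
for y in (1, 3, 5):
    print(f"In {y} year(s): {extrapolate(today, growth, y):.2e} ops/sec per dollar")
```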

Thirty years ago, Ray was already making predictions 30 and even 50 years out. He concedes that not everything can be predicted, but at least some aspects of the future can be.
His current work: getting computers to understand language

Because Ray is himself an entrepreneur, many people wonder why he chose to work for Google. The answer is that Google can help him reach his goal of changing the world through technology, so he accepted the job gladly. His work at Google is itself future-facing: "There is no real difference between my technical work at Google and my role as a futurist, because we are designing technology for 2018 or 2020; we think about where technology is going." Ray leads a team working on a natural language understanding project, aiming to grasp the meaning of language. Today, search generally starts from keywords and finds the documents that contain them, but the search engine does not understand what those documents mean. The next step for search technology is semantic understanding.

Natural language understanding, he points out, requires grasping meaning and how that meaning is expressed. The difficulty is that we do not even know, for natural language, what "meaning" is. "In speech recognition, you try to work out what someone is saying; in character recognition, you work out what letters are on the page. But if you want to understand what the page means, how do you do that?" What Ray means is that we need to build a framework for computers to understand natural language.

Ray has proposed a complete theory of how ideas are represented in the human brain, and he uses it to think about how computers might understand language. His latest book, "How to Create a Mind: The Secret of Human Thought Revealed," explores how the brain thinks and how the ideas in it are represented. When people translate, for example, they first grasp what the language means, form symbols in the mind that reflect that meaning, and then look for words in the other language that match it. Today's language processing does not work that way; it merely tries to match sequences of characters.

The Google Now team that Ray leads is working on natural language understanding. In the long run, he says, the search engine will understand the meaning behind the keywords. "It's a long-term goal, and we have to get there step by step, teaching computers to understand language. That is the work I am doing," he says.
Five predictions for future technology

Ray has made five major predictions about future technology, covering human genetic reprogramming, solar energy, 3D printing, search engines, and virtual reality. Some of them sound hard to grasp, so Ray explained them in detail.

1. By 2020 the human body will be reprogrammable, letting people ward off disease and avoid aging. Reprogramming one's biological information means using medical intervention to change the information processes underlying biology. Many people are skeptical of this prediction; Ray explains it as follows:

Take, for example, a disease caused by a missing gene. Most diseases are not caused by a single gene, but this kind is an exception, and it can be treated by replacing that gene. Work along these lines is already under way in practice: cells are taken from the body, the gene is added outside the body while the process is monitored under a microscope, the replacement is carried out across millions of cells, and the cells are then injected back into the patient to cure the disease.

Many people worldwide suffer from heart disease, and there is currently no good treatment, because the heart cannot repair itself. Ray's father developed heart disease in the 1960s and died in 1970. Today, cells can be extracted and reprogrammed so that the heart can repair its own damage. There are many examples like this: by changing genes, we will find a whole new class of therapies.

A similar process can be seen in cancerous tumors. Cancer therapies can be sought through stem cells: stem cells have a powerful ability to replicate, and by reprogramming them that replication can be blocked, suppressing the cancer's growth. There are currently thousands of projects attempting to reprogram genes in order to treat gene-related diseases.

2. By 2029 artificial intelligence will surpass human intelligence, and by 2045 we will reach the technological singularity: technology will produce super-intelligent machines, and humans and machines will become ever more deeply integrated.

On artificial intelligence, Stephen Hawking and Elon Musk have both voiced concerns about safety, arguing that AI could be enormously destructive. Ray's response: "He seems to be saying that super-intelligent AI will appear within the next five years. That is not realistic. My view is that AI will take another 15 years to reach human level, and even that number may be wrong."
AI experts generally believe that super-intelligent AI will take another 20 to 30 years to appear, which, on the scale of human history, is not a long time. Ray has written that before that point arrives, we can keep AI safe by strengthening human oversight and promoting debate within society's institutions.

In fact, every technology is a double-edged sword, with a good side and a bad side. "Fire can warm you and cook your food, and it can also burn down your house. Another example is biotechnology: today it can treat genetic diseases, but 30 years ago people worried that terrorists could use it to modify a common virus into a lethal weapon," he says. He notes that there are regular international biotechnology conferences whose aim is to set guidelines that keep the technology safe and prevent deliberate or accidental biotech disasters. "It involves complicated mechanisms, but those mechanisms have worked well; we have not seen problems in the past 30 years. I think that is a good model for how to develop artificial intelligence."

In his view, AI will in the future be deeply integrated with human civilization. AI will have access to all human knowledge, and humans will be able to converse with AI. AI will be able to do what humans can do, with a high level of intelligence, and it will keep developing. As AI integrates deeply with society, the world will become more peaceful: with better communication technology we will understand one another better, which will greatly reduce violence in the world. So as AI becomes part of human civilization, that civilization will become better.

3. 3D printing, for all its recent hype, will not enter its golden age until 2020, and large-scale replacement of manufacturing is still five years off. Technology cycles, he says, go through gradual growth, rapid explosion, a fall into the trough, and a return to normal, and 3D printing still needs time to truly mature.

His explanation: "With 3D printing, I hope people learn from experience and manage the transition well. Some believe 3D printing will revolutionize the world next year; I think 2020 is more likely. There are already some interesting applications, such as printing human organs, but we still need to perfect them."

4. Besides 3D printing, what other technologies are riding a similar curve? Virtual reality, he says, is one; it still needs a few years before its real breakthrough. By 2020 people will interact in fully immersive virtual environments, and in the 2030s those environments will add the sense of touch.

Virtual reality is very interesting for the games industry, he notes, since 3D virtual reality creates a sense of immersion. But it will not be in everyday use until around 2020, for example built into contact lenses, letting you talk with someone far away as though face to face; it will look convincingly real, and we may even be able to touch each other. For now, though, scenarios like that will have to wait until 2020.

5. Within five years, search engines will understand natural language.




Sunday, February 15, 2015

The Traits Behind Olivia Newton-John's Success: Optimism, Drive, Talent, and Smart Investing (Olivia Newton-John's successful life: optimism, striving, talent, and smart investment)

    Olivia Newton-John was born in Cambridge, England, in 1948 and moved with her family to Melbourne, Australia, at the age of five. Her father was Professor Brin Newton-John; her mother, Irene Born, was the eldest daughter of Nobel physics laureate Max Born. Olivia was the youngest of three children: her brother Hugh is a doctor and her sister Rona an actress.

   At 14 she formed an all-girl vocal group, Sol Four, with three classmates and performed regularly on local radio and television. She later won the television singing competition "Sing Sing Sing," and the prize money made a trip to England possible; at first she did not want to go, but her mother encouraged her and she went. By 1963 Olivia was a regular on Australian television and weekly pop music programs, and in 1964 she won a national talent contest, earning the trip to Britain. Before long her singing partner Pat had to return to Australia when her visa expired, and Olivia stayed on to keep working alone. She caught the eye of Cliff Richard, the "British Elvis," who invited her to join his tour and recorded a single with her, which brought her attention and a record contract. In 1966 Olivia released her first single on Decca, "Till You Say You'll Be Mine," a cover of a Jackie DeShannon song.

In 1971 she covered Bob Dylan's "If Not For You," which reached No. 7 in Britain and No. 25 in the United States. It was produced by John Farrar, who became her long-term collaborator. "Let Me Be There," her first album released in America, came out in 1973; the title track became her first Top 10 single, and the record won her the Academy of Country Music's award for most promising female vocalist and a Grammy for best country vocal performance, opening an exciting chapter in her musical career. As recorded, "Let Me Be There" was just a pop song, but the producer later added a pedal steel (a guitar-like stringed instrument), the sound most emblematic of country music. Country radio DJs loved it and pushed it hard, and it became her first gold record in the United States.

    In 1974 Peter Allen, an Australian singer with a gift for songwriting who was active on the nightclub circuit (and had been married to the actress Liza Minnelli for three years), was preparing his first solo album. His producer was another fine pop songwriter, Jeff Barry. Going through the songs Allen had written for the album, Barry felt none of them seemed especially likely to become a hit, so he offered an idea of his own that he thought would make a good song for a male singer. The two wrote "I Honestly Love You" together and cut a demo. Someone from their publishing company happened to be going to pitch new songs to Olivia Newton-John and brought the demo along. Olivia heard it, nearly melted into her chair, and immediately asked to record it. After some discussion, Barry agreed to give the song to Olivia, who was then riding high on the country charts. Oddly enough, her record company did not much like it and had no plans to release it as a single, but radio DJs fell in love with the track, put it into heavy rotation, and pressed the label to hurry up and release it. On October 5 of that year, "I Honestly Love You" reached No. 1, the first chart-topper of her career, and the following February it won her the Grammys for Record of the Year and Best Female Pop Vocal Performance. Over her career, her many hit records are said to have sold a combined total of around 100 million copies.

   By 2012 it looked as though the singer's spectacular career was winding down. Then, suddenly, she was back on top. According to People magazine's money report last Friday (February 13), Olivia Newton-John was the highest-earning female singer in the world, pulling in an astonishing US$82 million between January 2014 and January 2015, nearly US$50 million ahead of her closest competitor.

The wealth factors

In compiling this year's list, People magazine considered Newton-John's sources of wealth, such as upfront pay, profit participation, residuals, endorsements, and advertising work. The Australian singer-songwriter's net worth is estimated at US$245 million. She has been fortunate with smart stock investments and substantial property holdings, and has a lucrative endorsement deal with CoverGirl cosmetics. She also owns several restaurants and a football team (the "Cambridge Angels"), has launched her own brand of vodka, a best-selling perfume aimed at the junior market, a fashion line bearing her name, and the Olivia Newton-John Cancer and Wellness Centre.

Analysis

  • Olivia Newton-John's traits for success are optimism, drive, talent, and smart investing. Her holdings include her own vodka brand, a best-selling branded perfume, a branded fashion line, and several branded restaurants; she has succeeded by running her brands and marketing herself.
  • Her smart stock investing rests on professional advisers and her own distinctive eye, and her investments are all long-term, proving once again that successful investors are long-term investors with distinctive vision. A long-term investor needs the discipline to avoid frequent trading and instead make regular, fixed-amount purchases against a chosen index.

Sunday, January 25, 2015

Microsoft HoloLens: a science-fiction virtual-reality device with 3D holographic projection that will beat Google Glass to become the next important wearable computer

Microsoft HoloLens: an ultra-sci-fi virtual reality device

HoloLens is a strikingly futuristic virtual-reality device. It can project all kinds of virtual imagery onto real-world scenes, and, even more impressively, its powerful real-time computing lets the wearer interact directly with those virtual images. Enlarging, shrinking, and moving objects, or painting a virtual object a new color, are all possible, making you feel as all-powerful as Tony Stark in the Iron Man films!

HoloLens is Microsoft's first virtual-reality device, and it operates entirely on its own, with no extra accessories or external cables. It is built on Windows 10 together with Microsoft's Windows Holographic technology, and it packs a powerful processor, a graphics chip, multiple sensors, holographic high-definition lenses, and a Holographic Processing Unit (HPU). It can track the user's movements and surroundings in real time while processing the megabytes of data streaming in from its sensors, letting the user interact directly with virtual objects. Designers can inspect and modify their products more intuitively, and the device can also be applied to home entertainment, scientific research, and many other fields.

Although it has only just announced HoloLens, Microsoft does not intend to keep the technology to itself; it has chosen to share Windows Holographic with other makers of virtual-reality devices. Microsoft says every Windows 10 system will include the Holograms APIs, and it has also announced HoloStudio, software for creating holographic content, which lets you use virtual, interactive tools to design interactive virtual interfaces and objects of your own, and even lets you send those designs straight to a 3D printer.

At its Windows 10 consumer event, Microsoft also unveiled HoloLens as a standalone head-mounted device, demonstrating how natural user interfaces, virtual-reality display, and Windows 10 come together, and allowing partners to design devices of other forms around the same technology. In an interview afterwards, Phil Spencer, head of Microsoft's Xbox division, said HoloLens is not positioned to compete with the virtual-reality headsets from Oculus, Sony, or Samsung. Although HoloLens offers the same or similar capabilities, and can also connect to or stream video from Windows or Xbox One and display virtual-reality imagery, its emphasis remains on operating independently across many kinds of applications rather than serving as a single display accessory.
HoloLens is a forward-looking product for Microsoft: it combines a virtual-reality display, gesture and voice interaction, and terabyte-scale real-time computation, and it shows what Windows 10 could do on wearable devices. Microsoft says the product is still at an early stage of development and that it will keep expanding the areas where it can actually be applied.

On the design side, HoloLens operates independently with no cables at all; beyond the processor and graphics hardware it adds a dedicated Holographic Processing Unit (HPU) for holographic imaging, along with Windows Holographic technology and support for Universal Windows Apps.

Google Glass: the principles and feature set are becoming clearer, but is it being discontinued?

Google says developers can now add voice commands for other services to the main menu. The first two voice commands demonstrated are "Post an update," currently supported by the social app Path, and "Take a note," supported by Evernote. Glass users must first say "OK, Glass" to activate voice commands; posting updates and taking notes are only the beginning, and Google will eventually let Glass users invoke all kinds of services by voice.

Google has also improved Google Now, a key service on the Glass platform. Google Now promises to show the right information at the right time and place through cards of various kinds, such as traffic, weather, and live sports scores. It now adds cards for movies, events, and emergency alerts: it can show what is playing at nearby cinemas, remind you that a concert or a dinner reservation is coming up, or push emergency notices when a natural disaster strikes.

The new media player adds pause, play, and fast-forward controls.

Google hopes Google Glass can officially launch this year. An estimated few thousand people are currently using the test version, Google Glass Explorer, which sells for US$1,500.

The Google Glass team says it has "graduated": production and sales to stop next week?

Starting next Monday, January 19, the two-year Google Glass Explorer program will end, and Google Glass will stop being sold and produced, with no further orders accepted.

So did it fail? Do smart glasses have no future?

Well, neither. On the official Google Glass Google+ account, the team announced, "We have graduated from Google's X lab!" and thanked all the pioneers who helped them try out and experiment with Google Glass.

That's right: Google Glass will stop being sold to ordinary consumers (partner companies will continue), and it is moving out of the Google X lab to be overseen by Tony Fadell, CEO of the smart-home division Nest Labs.

Google has also clarified that, although everything appears paused for now, it is not giving up on Google Glass; on the contrary, it intends to deliver a version of Glass suited to ordinary consumers, though the timing is still uncertain. Most media read the move as a sign that "Google Glass has grown up" and that commercialization is the next step.

Either way, it will be interesting to see what the next Google Glass looks like, and whether it overcomes the long-criticized problems of privacy, safety, design, and cost.

Analysis
  • Google Glass is still too power-hungry, and its price and features are not compelling. By comparison, Microsoft HoloLens is more useful than Google Glass, and adding a single camera would give it everything Google Glass can do; the remaining question is the HoloLens price.
  • Microsoft HoloLens will face a power-consumption problem of its own, because the Holographic Processing Unit (HPU) it requires will draw even more power than Google Glass.

Wednesday, January 7, 2015

From machine learning to a revolution in human social structures: giant cloud robots will change human society within 20 years

It used to be that if you wanted to get a computer to do something new, you would have to program it. Now, programming, for those of you here that haven't done it yourself, requires laying out in excruciating detail every single step that you want the computer to do in order to achieve your goal. Now, if you want to do something that you don't know how to do yourself, then this is going to be a great challenge.

So this was the challenge faced by this man, Arthur Samuel. In 1956, he wanted to get this computer to be able to beat him at checkers. How can you write a program, lay out in excruciating detail, how to be better than you at checkers? So he came up with an idea: he had the computer play against itself thousands of times and learn how to play checkers. And indeed it worked, and in fact, by 1962, this computer had beaten the Connecticut state champion.
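To make the self-play idea concrete, here is a tiny Python sketch in the spirit of Samuel's experiment. It is not his checkers program; it is a toy Nim game (take 1 or 2 stones, taking the last stone wins) learned with a simple tabular value update over many games the program plays against itself. The reward values, learning rate, and game are all invented for illustration.

```python
import random

WIN_REWARD, LOSE_REWARD = 1.0, 0.0
values = {}  # value of (stones_left, move) pairs from the mover's perspective

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def choose(stones, epsilon=0.1):
    moves = legal_moves(stones)
    if random.random() < epsilon:                  # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: values.get((stones, m), 0.5))  # exploit

def play_one_game(learning_rate=0.1):
    stones, history = 10, {0: [], 1: []}           # two "copies" of the program
    player = 0
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player                        # taking the last stone wins
        player = 1 - player
    # Nudge the value of every move toward the game's outcome for its player.
    for p in (0, 1):
        outcome = WIN_REWARD if p == winner else LOSE_REWARD
        for state_move in history[p]:
            old = values.get(state_move, 0.5)
            values[state_move] = old + learning_rate * (outcome - old)

for _ in range(20000):
    play_one_game()
# After self-play, taking 1 stone from 10 (leaving a multiple of 3) is valued
# higher than taking 2, which is the correct strategy for this toy game.
print(values.get((10, 1)), values.get((10, 2)))
```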

So Arthur Samuel was the father of machine learning, and I have a great debt to him, because I am a machine learning practitioner. I was the president of Kaggle, a community of over 200,000 machine learning practitioners. Kaggle puts up competitions to try and get them to solve previously unsolved problems, and it's been successful hundreds of times. So from this vantage point, I was able to find out a lot about what machine learning can do in the past, can do today, and what it could do in the future. Perhaps the first big success of machine learning commercially was Google. Google showed that it is possible to find information by using a computer algorithm, and this algorithm is based on machine learning. Since that time, there have been many commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest products that you might like to buy, movies that you might like to watch. Sometimes, it's almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about who your friends might be and you have no idea how it did it, and this is because it's using the power of machine learning. These are algorithms that have learned how to do this from data rather than being programmed by hand.
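As a minimal illustration of "learned from data rather than programmed by hand," here is a toy item-based recommendation sketch. It is not the actual Amazon or Netflix system; the ratings data and names are invented, and no rule about any specific product is written by hand, the suggestions come entirely from similarities in the data.

```python
import math

ratings = {  # user -> {item: rating}; made-up data
    "ann": {"matrix": 5, "inception": 4, "frozen": 1},
    "bob": {"matrix": 4, "inception": 5, "alien": 4},
    "cat": {"frozen": 5, "tangled": 4, "matrix": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' rating dictionaries."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den

def recommend(user, k=2):
    """Suggest unseen items rated highly by the most similar other users."""
    sims = sorted(((cosine(ratings[user], ratings[o]), o)
                   for o in ratings if o != user), reverse=True)
    scores = {}
    for sim, other in sims[:k]:
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))  # likely suggests "alien" (liked by the similar user "bob")
```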

This is also how IBM was successful in getting Watson to beat the two world champions at "Jeopardy," answering incredibly subtle and complex questions like this one. ["The ancient 'Lion of Nimrud' went missing from this city's national museum in 2003 (along with a lot of other stuff)"] This is also why we are now able to see the first self-driving cars. If you want to be able to tell the difference between, say, a tree and a pedestrian, well, that's pretty important. We don't know how to write those programs by hand, but with machine learning, this is now possible. And in fact, this car has driven over a million miles without any accidents on regular roads.

So we now know that computers can learn, and computers can learn to do things that we actually sometimes don't know how to do ourselves, or maybe can do them better than us. One of the most amazing examples I've seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy called Geoffrey Hinton from the University of Toronto won a competition for automatic drug discovery. Now, what was extraordinary here is not just that they beat all of the algorithms developed by Merck or the international academic community, but nobody on the team had any background in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm called deep learning. So important was this that in fact the success was covered in The New York Times in a front page article a few weeks later. This is Geoffrey Hinton here on the left-hand side. Deep learning is an algorithm inspired by how the human brain works, and as a result it's an algorithm which has no theoretical limitations on what it can do. The more data you give it and the more computation time you give it, the better it gets.
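For readers who want to see what "deep learning" looks like at the smallest possible scale, here is a sketch of a tiny fully connected neural network trained by gradient descent on the XOR problem, written with plain NumPy. It only illustrates the learn-from-data idea in the paragraph above; it is nothing like Hinton's actual drug-discovery models, and the architecture and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)       # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)       # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)                        # forward pass
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)          # backprop of squared error
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    W2 -= lr * grad_W2; b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # approaches [0, 1, 1, 0]
```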

The New York Times also showed in this article another extraordinary result of deep learning which I'm going to show you now. It shows that computers can listen and understand.

(Video) Richard Rashid: Now, the last step that I want to be able to take in this process is to actually speak to you in Chinese. Now the key thing there is, we've been able to take a large amount of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text and converts it into Chinese language, and then we've taken an hour or so of my own voice and we've used that to modulate the standard text-to-speech system so that it would sound like me. Again, the result's not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There's much work to be done in this area. (In Chinese)

Jeremy Howard: Well, that was at a machine learning conference in China. It's not often, actually, at academic conferences that you do hear spontaneous applause, although of course sometimes at TEDx conferences, feel free. Everything you saw there was happening with deep learning. (Applause) Thank you. The transcription in English was deep learning. The translation to Chinese and the text in the top right, deep learning, and the construction of the voice was deep learning as well.

So deep learning is this extraordinary thing. It's a single algorithm that can seem to do almost anything, and I discovered that a year earlier, it had also learned to see. In this obscure competition from Germany called the German Traffic Sign Recognition Benchmark, deep learning had learned to recognize traffic signs like this one. Not only could it recognize the traffic signs better than any other algorithm, the leaderboard actually showed it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see better than people. Since that time, a lot has happened. In 2012, Google announced that they had a deep learning algorithm watch YouTube videos and crunched the data on 16,000 computers for a month, and the computer independently learned about concepts such as people and cats just by watching the videos. This is much like the way that humans learn. Humans don't learn by being told what they see, but by learning for themselves what these things are. Also in 2012, Geoffrey Hinton, who we saw earlier, won the very popular ImageNet competition, looking to try to figure out from one and a half million images what they're pictures of. As of 2014, we're now down to a six percent error rate in image recognition. This is better than people, again.

So machines really are doing an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers. Imagine how long it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded to Baidu's deep learning system, and underneath you can see that the system has understood what that picture is and found similar images. The similar images actually have similar backgrounds, similar directions of the faces, even some with their tongue out. This is not clearly looking at the text of a web page. All I uploaded was an image. So we now have computers which really understand what they see and can therefore search databases of hundreds of millions of images in real time.

So what does it mean now that computers can see? Well, it's not just that computers can see. In fact, deep learning has done more than that. Complex, nuanced sentences like this one are now understandable with deep learning algorithms. As you can see here, this Stanford-based system showing the red dot at the top has figured out that this sentence is expressing negative sentiment. Deep learning now in fact is near human performance at understanding what sentences are about and what it is saying about those things. Also, deep learning has been used to read Chinese, again at about native Chinese speaker level. This algorithm developed out of Switzerland by people, none of whom speak or understand any Chinese. As I say, using deep learning is about the best system in the world for this, even compared to native human understanding.
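The Stanford system mentioned above is a recursive neural network over parse trees; the sketch below swaps in a much simpler bag-of-words, logistic-regression-style classifier just to make the idea of "learning sentiment from labeled sentences" concrete. The training sentences and hyperparameters are invented for the example.

```python
from collections import Counter
import math

train = [("a gorgeous , witty , deeply moving film", 1),
         ("clever and heartfelt storytelling", 1),
         ("a dull , plodding mess", 0),
         ("witless and deeply boring", 0)]

vocab = sorted({w for text, _ in train for w in text.split()})
weights = {w: 0.0 for w in vocab}
bias = 0.0

def predict(text):
    """Return the model's probability that the text is positive."""
    score = bias + sum(weights.get(w, 0.0) for w in text.split())
    return 1.0 / (1.0 + math.exp(-score))

lr = 0.5
for epoch in range(200):                             # simple gradient descent
    for text, label in train:
        err = label - predict(text)
        bias += lr * err
        for w, count in Counter(text.split()).items():
            if w in weights:
                weights[w] += lr * err * count

print(predict("a deeply moving and clever film"))    # high -> positive
print(predict("a plodding , witless mess"))          # low  -> negative
```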

This is a system that we put together at my company which shows putting all this stuff together. These are pictures which have no text attached, and as I'm typing in here sentences, in real time it's understanding these pictures and figuring out what they're about and finding pictures that are similar to the text that I'm writing. So you can see, it's actually understanding my sentences and actually understanding these pictures. I know that you've seen something like this on Google, where you can type in things and it will show you pictures, but actually what it's doing is it's searching the webpage for the text. This is very different from actually understanding the images. This is something that computers have only been able to do for the first time in the last few months.

So we can see now that computers can not only see but they can also read, and, of course, we've shown that they can understand what they hear. Perhaps not surprising now that I'm going to tell you they can write. Here is some text that I generated using a deep learning algorithm yesterday. And here is some text that an algorithm out of Stanford generated. Each of these sentences was generated by a deep learning algorithm to describe each of those pictures. This algorithm before has never seen a man in a black shirt playing a guitar. It's seen a man before, it's seen black before, it's seen a guitar before, but it has independently generated this novel description of this picture. We're still not quite at human performance here, but we're close. In tests, humans prefer the computer-generated caption one out of four times. Now this system is now only two weeks old, so probably within the next year, the computer algorithm will be well past human performance at the rate things are going. So computers can also write.

So we put all this together and it leads to very exciting opportunities. For example, in medicine, a team in Boston announced that they had discovered dozens of new clinically relevant features of tumors which help doctors make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that, looking at tissues under magnification, they've developed a machine learning-based system which in fact is better than human pathologists at predicting survival rates for cancer sufferers. In both of these cases, not only were the predictions more accurate, but they generated new insightful science. In the radiology case, they were new clinical indicators that humans can understand. In this pathology case, the computer system actually discovered that the cells around the cancer are as important as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists had been taught for decades. In each of those two cases, they were systems developed by a combination of medical experts and machine learning experts, but as of last year, we're now beyond that too. This is an example of identifying cancerous areas of human tissue under a microscope. The system being shown here can identify those areas more accurately, or about as accurately, as human pathologists, but was built entirely with deep learning using no medical expertise by people who have no background in the field. Similarly, here, this neuron segmentation. We can now segment neurons about as accurately as humans can, but this system was developed with deep learning using people with no previous background in medicine.

So myself, as somebody with no previous background in medicine, I seem to be entirely well qualified to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest that it ought to be possible to do very useful medicine using just these data analytic techniques. And thankfully, the feedback has been fantastic, not just from the media but from the medical community, who have been very supportive. The theory is that we can take the middle part of the medical process and turn that into data analysis as much as possible, leaving doctors to do what they're best at. I want to give you an example. It now takes us about 15 minutes to generate a new medical diagnostic test and I'll show you that in real time now, but I've compressed it down to three minutes by cutting some pieces out. Rather than showing you creating a medical diagnostic test, I'm going to show you a diagnostic test of car images, because that's something we can all understand.

So here we're starting with about 1.5 million car images, and I want to create something that can split them into the angle of the photo that's being taken. So these images are entirely unlabeled, so I have to start from scratch. With our deep learning algorithm, it can automatically identify areas of structure in these images. So the nice thing is that the human and the computer can now work together. So the human, as you can see here, is telling the computer about areas of interest which it wants the computer then to try and use to improve its algorithm. Now, these deep learning systems actually are in 16,000-dimensional space, so you can see here the computer rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then point out the areas that are interesting. So here, the computer has successfully found areas, for example, angles. So as we go through this process, we're gradually telling the computer more and more about the kinds of structures we're looking for. You can imagine in a diagnostic test this would be a pathologist identifying areas of pathosis, for example, or a radiologist indicating potentially troublesome nodules. And sometimes it can be difficult for the algorithm. In this case, it got kind of confused. The fronts and the backs of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts as opposed to the backs, then telling the computer that this is a type of group that we're interested in.
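As a hedged miniature of the "find a projection that reveals structure" step described above: project unlabeled feature vectors into 2-D with PCA so a human can spot clusters and label a handful of examples. This is not the 16,000-dimensional tool from the talk; the synthetic "image features" below are generated just to show two hidden groups separating.

```python
import numpy as np

rng = np.random.default_rng(1)
# Pretend these are image feature vectors: two hidden groups in 50 dimensions.
group_a = rng.normal(loc=0.0, scale=1.0, size=(100, 50))
group_b = rng.normal(loc=3.0, scale=1.0, size=(100, 50))
features = np.vstack([group_a, group_b])

def pca_2d(X):
    """Project rows of X onto their top two principal components."""
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data gives the principal directions.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:2].T

coords = pca_2d(features)
# A human would look at this 2-D view, circle an interesting region, and label
# a few points; here we just confirm the two hidden groups separate on PC1.
print("group A mean on PC1:", coords[:100, 0].mean().round(2))
print("group B mean on PC1:", coords[100:, 0].mean().round(2))
```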

So we do that for a while, we skip over a little bit, and then we train the machine learning algorithm based on these couple of hundred things, and we hope that it's gotten a lot better. You can see, it's now started to fade some of these pictures out, showing us that it already is recognizing how to understand some of these itself. We can then use this concept of similar images, and using similar images, you can now see, the computer at this point is able to entirely find just the fronts of cars. So at this point, the human can tell the computer, okay, yes, you've done a good job of that.

Sometimes, of course, even at this point it's still difficult to separate out groups. In this case, even after we let the computer try to rotate this for a while, we still find that the left sides and the right sides pictures are all mixed up together. So we can again give the computer some hints, and we say, okay, try and find a projection that separates out the left sides and the right sides as much as possible using this deep learning algorithm. And giving it that hint -- ah, okay, it's been successful. It's managed to find a way of thinking about these objects that's separated out these together.

So you get the idea here. This is a case not where the human is being replaced by a computer, but where they're working together. What we're doing here is we're replacing something that used to take a team of five or six people about seven years and replacing it with something that takes 15 minutes for one person acting alone.

So this process takes about four or five iterations. You can see we now have 62 percent of our 1.5 million images classified correctly. And at this point, we can start to quite quickly grab whole big sections, check through them to make sure that there's no mistakes. Where there are mistakes, we can let the computer know about them. And using this kind of process for each of the different groups, we are now up to an 80 percent success rate in classifying the 1.5 million images. And at this point, it's just a case of finding the small number that aren't classified correctly, and trying to understand why. And using that approach, by 15 minutes we get to 97 percent classification rates.
So this kind of technique could allow us to fix a major problem, which is that there's a lack of medical expertise in the world. The World Economic Forum says that there's between a 10x and a 20x shortage of physicians in the developing world, and it would take about 300 years to train enough people to fix that problem. So imagine if we can help enhance their efficiency using these deep learning approaches?

So I'm very excited about the opportunities. I'm also concerned about the problems. The problem here is that every area in blue on this map is somewhere where services are over 80 percent of employment. What are services? These are services. These are also the exact things that computers have just learned how to do. So 80 percent of the world's employment in the developed world is stuff that computers have just learned how to do. What does that mean? Well, it'll be fine. They'll be replaced by other jobs. For example, there will be more jobs for data scientists. Well, not really. It doesn't take data scientists very long to build these things. For example, these four algorithms were all built by the same guy. So if you think, oh, it's all happened before, we've seen the results in the past of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It's very hard for us to estimate this, because human performance grows at this gradual rate, but we now have a system, deep learning, that we know actually grows in capability exponentially. And we're here. So currently, we see the things around us and we say, "Oh, computers are still pretty dumb." Right? But in five years' time, computers will be off this chart. So we need to be starting to think about this capability right now.

We have seen this once before, of course. In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all the situations, things really settled down. The Machine Learning Revolution is going to be very different from the Industrial Revolution, because the Machine Learning Revolution, it never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before, so your previous understanding of what's possible is different.

This is already impacting us. In the last 25 years, as capital productivity has increased, labor productivity has been flat, in fact even a little bit down.

So I want us to start having this discussion now. I know that when I often tell people about this situation, people can be quite dismissive. Well, computers can't really think, they don't emote, they don't understand poetry, we don't really understand how they work. So what? Computers right now can do the things that humans spend most of their time being paid to do, so now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality. Thank you!