
Sunday, October 30, 2016

The robot era has arrived: every IoT device will have an artificial-intelligence core, and the impact on humanity will be enormous ( The era of robots is coming, all devices will have the core of artificial intelligence )

The robots are coming, and these 7 kinds of work are what they do best

Google's involvement in the robotics industry will have a major impact on humanity
The robots are coming! With ever more capable designs, robots now provide diverse services and interact more naturally with people in homes, offices, healthcare, companionship and therapy, education and entertainment, finance, retail stores, and other everyday settings.

With major advances in artificial intelligence, environmental and motion sensing and recognition, and affective-cognitive interaction, robots have reached a new level: they can take humanoid form, hold conversations with people, and even learn and grow smarter from human interaction.

Today, in homes, offices, healthcare, companionship and therapy, education and entertainment, finance, retail stores, and other everyday settings, we are beginning to see robots interacting with people and providing a wide range of services.

Take SoftBank's much-discussed service robot Pepper: as early as late September, Pepper is expected to be on duty in Taiwan's major telecom stores, banks, and retail outlets.

Robots can also be good household helpers. Toyota's Human Support Robot, for example, can be remotely directed from a tablet's touch interface to pick up or suction objects off the floor, while cleaning robots use infrared and optical sensing to sweep, vacuum, and mop.

Robots can even turn chef and serve up one signature dish after another.

Foreign media report that Moley Robotics plans to launch a robotic kitchen solution as early as 2017: household members will be able to direct the kitchen's robotic arms through the cooking process from a kitchen touchscreen, a smartphone, or a tablet.

As populations age and birth rates fall, labor shortages follow. Service industries such as banks, hospitals, department stores, restaurants, and automotive transportation are steadily introducing robots to replace human labor, creating new business models in the process.

Honda's All-new ASIMO humanoid robot, for example, has advanced to an autonomous stage: it can play the role of business receptionist in an office and interact naturally with ordinary office workers.

Under the FinTech wave, digital robots may gradually replace bank branch staff, acting as digital financial advisors that serve walk-in customers directly with financial information and advice.

Meanwhile, robot software is seeping into automotive transportation. In late July, Honda and SoftBank announced a partnership to apply the cloud-based AI software built into SoftBank's Pepper robot to automobiles. Future cars will effectively become supercomputers or robots, able to converse with driver and passengers on the road.

In healthcare, therapeutic robots can support interaction among medical staff, family caregivers, and patients. Japan's RIKEN-TRI Collaboration, for example, has introduced the nursing-care robot RIBA, which can lift and set down patients and help them in and out of wheelchairs.

A U.S. research team has also developed the Smart Tissue Autonomous Robot, which can suture soft tissue in animal organs with skill comparable to a human surgeon.

Robots can do more than work for humans; they can also sing to keep you company.

Japan's National Institute of Advanced Industrial Science and Technology (AIST), for example, has built the lifelike robot HRP-4C, modeled on human hair, facial features, neck, and limbs, with facial expressions that change with external conditions. Combined with speech synthesis, HRP-4C can mimic a human voice to sing, its body moving vividly in time with the song.

A future threatened by technology: the day humans have no work - worth reading and reflecting on

One day, Henry Ford was showing a union leader around his newest automated plant, gloating that the automated production line needed no workers. The union leader shot back, "And how are you going to get these robots to buy your cars?"

The exchange may be apocryphal, but it foreshadowed how long-term, rapid technological progress has pushed the socioeconomic structure built on wages, productivity, and purchasing power to the brink of upheaval.

In the past, machines were tools that raised workers' productivity. Today, machines are the workers, and what they replace is no longer just low-skill jobs.

Computers have become machines that "can think": they make decisions, learn, even show curiosity. Add big data and cloud computing, and the white-collar class whose work depends on computerized information will eventually be swallowed by machines and software. These specialized, routine, predictable jobs - lawyers, pharmacists, physicians, analysts, IT technicians, even corporate executives - will be replaced one by one.

Machines will occupy the jobs, but robots only produce and never consume. Wage growth will stagnate and income will concentrate among a wealthy few; lacking sufficient income and purchasing power, the mass of consumers will drain society of its spending power, ultimately threatening economic growth.

Unlike most technologists and trend-watchers, author Martin Ford is a successful entrepreneur and a longtime observer of software development. Approaching technology from the distinctive angle of its social impact and marshaling extensive evidence, he was among the first to examine in detail the disruption that machine intelligence and robotics may bring, identifying seven deadly economic trends born of technology.

Whether we like it or not, machines will take over jobs at an accelerating pace and the life we know will come to an end; we will have to rethink how the economy works to cope with technology's immense disruptive power.

Robots are no longer the clumsy mechanical arms of our imagination: today's robots share knowledge, match human dexterity, and are untroubled by 3D vision. The machine army is about to break through the human employment safety net, as agriculture, manufacturing, services, retail, and the other industries that provide the most jobs automate in order to survive. We are creating jobs far more slowly than jobs are disappearing.

Robots will become a super labor force that does not eat, shop, or consume, while the people who lose their jobs become incapacitated consumers. When the middle class no longer holds purchasing power, economic growth will stall indefinitely, ending in a total economic collapse that economists cannot foresee from past experience.

Big data, natural language technology, machine learning, and genetic programming algorithms have brought artificial intelligence out of science fiction and into everyday life, and the skills and expertise humans take pride in are about to face a severe challenge.


On the 20th, Google announced its acquisition of API.AI, a startup that provides chatbot development tools. With API.AI, developers can easily build conversational UIs that support both text and speech recognition.

API.AI supports 15 languages

API.AI's API uses speech recognition, intent recognition, and contextual understanding to let computers interpret human language and turn it into actions, helping developers build Siri-like conversational assistants for chatbots, apps, smart appliances, and more.
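
To make the idea of intent recognition concrete, here is a toy sketch of mapping free text to an intent plus a parameter. It is an illustration of the concept only, not API.AI's actual SDK; the intent names, keywords, and phrases are all invented:

```python
# Toy intent recognizer: maps free text to an (intent, parameter) pair.
# Purely illustrative; real conversational platforms use trained models,
# not keyword lists like this.

import re

# Each invented intent lists trigger keywords and an optional parameter pattern.
INTENTS = {
    "set_alarm": {
        "keywords": {"alarm", "wake"},
        "param": re.compile(r"(\d{1,2}:\d{2})"),  # e.g. "7:30"
    },
    "weather": {
        "keywords": {"weather", "rain", "sunny"},
        "param": re.compile(r"in (\w+)"),          # e.g. "in Taipei"
    },
}

def recognize(utterance: str):
    """Return (intent_name, parameter) for the best keyword match, or (None, None)."""
    words = set(re.findall(r"\w+", utterance.lower()))
    best, best_score = None, 0
    for name, spec in INTENTS.items():
        score = len(words & spec["keywords"])
        if score > best_score:
            best, best_score = name, score
    if best is None:
        return None, None
    m = INTENTS[best]["param"].search(utterance)
    return best, (m.group(1) if m else None)

print(recognize("Wake me up with an alarm at 7:30"))  # ('set_alarm', '7:30')
print(recognize("What's the weather in Taipei?"))     # ('weather', 'Taipei')
```

The real value of a platform like API.AI is that the matching step above is replaced by machine-learned language understanding, so users are not restricted to fixed keyword phrasings.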

Google notes that more than 60,000 developers already use API.AI, including for messaging platforms such as Slack, Facebook Messenger, and Kik. API.AI currently supports 15 languages and dialects, including English, Chinese, French, German, and Spanish, and its own chatbot assistant, Assistant, already has more than 20 million users.

Facebook and Google's speech-recognition war

Google already has deep experience in natural language processing, and this year it opened its natural-language analysis tool, the Google Natural Language API, to developers free of charge.

Google has also launched Google Assistant, a smart voice assistant along the lines of Amazon's Alexa and Apple's Siri, and integrated it into its messaging app Allo, where it can suggest replies and help book restaurants. Acquiring API.AI should make it easier for Google to parse human language and understand the intent behind the words.

Besides Google, Facebook is also strengthening speech recognition in its messaging apps: it acquired speech-recognition startup WIT.AI in 2015 and has been testing transcription of Facebook Messenger voice messages so recipients can simply read them. Facebook's WhatsApp, meanwhile, integrates with Siri in iOS 10, letting users tell WhatsApp by voice to place a call or send a message.

Speech recognition + Bluetooth earphones = the voice era arrives

With speech recognition, the things a smartphone can do today may soon be doable without ever opening the phone.

Noted technology analyst Mary Meeker argues in her report that the voice era is coming. "Voice is the most efficient form of computing input," she explains; voice interfaces can also more easily predict the intent behind what we say, taking us straight to the function we want instead of making us browse from the home screen.

Apple's AirPods wireless earphones point to the same trend: with a built-in hands-free microphone, users can summon Siri at any moment. One can imagine our channel to smart assistants gradually shifting from the phone to the earphones, and perhaps we will no longer spend so much time staring at phone screens!


According to Reuters, Toyota, Japan's largest automaker, has announced a robot child that can serve as a travel guide and can comfort and console women who have no children. Toyota reportedly plans to begin selling this "robot baby" next year.

Reuters reports that the robot, named Kirobo Mini and roughly 10 cm tall, contains a camera, a microphone, and Bluetooth, and can connect to a smartphone. Toyota's R&D division says it can recognize human emotions and react to them. For example, riding in a car with its owner, Kirobo Mini will cry "Oops!" when the car brakes suddenly and then strike up a conversation, the aim being to keep a possibly tired driver from falling asleep at the wheel. And once the car reaches its destination safely, Kirobo Mini praises its owner.

Beyond keeping drivers company, the most unusual thing about Kirobo Mini is that it is meant to comfort Japanese women without children. According to Toyota, the robot can ease the negative psychological effects that Japan's low birth rate has on women. Toyota's chief engineer notes that Kirobo Mini's base resembles a small child's legs and can wobble, mimicking children who must stay seated because their sense of balance is not yet fully developed.

Toyota has previously developed a series of earlier Kirobo robots, one of which worked aboard the International Space Station for 18 months to keep the astronauts company and provide psychological support. The new Kirobo Mini will carry a suggested retail price in Japan of US$400 (about NT$12,550).

In food service and manufacturing, more than half the work can already be handed to robots

The Wall Street Journal reported on the 12th that, according to a report from McKinsey & Co., food-service jobs are the most likely to be displaced by technology: 73% of their work content could be performed with existing automation. The report also shows that predictable, repetitive tasks such as packaging goods and production-line welding can be taken over by current robotics. These automatable tasks account for about one fifth of total U.S. working hours, and considerably more in certain manufacturing sectors: McKinsey estimates that up to 59% of manufacturing activity could be done by robots, and that 90% of welding and cutting work could be automated.

Of course, "automatable" does not mean an army of robots will replace humans overnight, especially in industries where labor is plentiful. The report finds that work requiring expert judgment, planning, and creativity is harder for robots to take over, and healthcare workers need not yet worry about losing their jobs.

McKinsey's figures show that 51% of working hours in retail and transportation could be automated, 43% in finance and insurance, and 41% in arts, entertainment, and recreation. Education services, by contrast, is only 27% automatable.

McKinsey adds that if robots one day understand natural language as well as an average human, the automatable share of working hours in finance and insurance would jump from 43% to 66%.

Trend analysis
  • Big data, speech recognition, image recognition, natural language technology, machine learning, genetic programming algorithms, robotic arms, and street-scene recognition for walking robots: the combined impact of these developments on humanity is enormous. They confirm once again that the robot era has arrived and that humanity cannot hold it back.
  • If Google, Facebook, Apple, or Intel integrates robotics with the computer, a reasonably priced device that is both robot and computer will spread through human society, raising enterprise efficiency and driving the next revolution in robot-computers for entertainment, teaching, search, and recommendation.
  • Robotics and artificial intelligence will surpass wearables to become the next ubiquitous technology products.



Thursday, February 19, 2015

Morgan Stanley believes self-driving connected cars will shape a wonderful world over the next 30 years ( Morgan Stanley thinks the self-driving connected car will create a new future for human life )

Morgan Stanley Thinks Self-Driving Cars Will Bring Utopia by 2026

The term "techno-utopian" is helpful for referring to people who believe technology alone can solve the world's problems. The term is rarely used literally. But if the financial services company Morgan Stanley is to be believed, driverless cars will literally bring about a utopian society in just over a decade.

Business Insider has this slide from Morgan Stanley about the future of Tesla Motors. The chart looks at projections for autonomous car capabilities and by 2026 it predicts a "utopian society."

One can't help but wonder if "100% autonomous, utopian society" was maybe meant as a harmless chucklegoof that didn't get edited out before publication. Joke or not, there are indeed plenty of people who are putting their eggs in the self-driving car basket.

Gartner: in five years, 250 million connected cars will have self-driving capability

Market research firm Gartner predicts that by 2020 there will be 250 million connected cars on the world's roads, all offering new in-vehicle services and self-driving capability, making them a major element of the Internet of Things (IoT).

Gartner research director James Hines says the connected car is already a reality, and in-car wireless connectivity is rapidly spreading from high-end cars and premium brands to mid-range models. The creation and consumption of in-car digital content will drive more sophisticated infotainment systems, creating opportunities for application processors, graphics accelerators, displays, and human-machine interface technologies. At the same time, new concepts of mobility and vehicle use will push car ownership, particularly in cities, toward new business models and alternatives.

Gartner forecasts that the number of IoT devices worldwide will grow from 4.9 billion in 2015 to 25 billion in 2020. By 2020, one in five vehicles on the road will have wireless connectivity, more than 250 million connected cars in all, featuring telematics, automated driving, in-vehicle infotainment, and a range of mobility services.

As for the future of IoT, Gartner believes no platform will come to dominate the IoT ecosystem before 2018. Another Gartner research director, Alfonso Velosa, notes that IoT standards and ecosystems are still evolving, and many of the vendors and ecosystems behind today's IoT projects may fail. CIOs should make sure their key systems follow a project strategy that can meet future needs, which matters especially for projects involving infrastructure meant to last decades; gateway-based architectures will be the key to making IoT projects stand the test of time.

Even as enterprises race to build their own IoT ecosystems, there is as yet no consistent business or technical model; standards are still being established, and most IoT projects use custom components. The absence of a dominant technology service provider will make this landscape even more complicated.

Bran Ferren: To create for the ages, let's combine art and engineering : Self-Driving Cars

Good morning. When I was young, I had an experience that changed my life, and it is in fact why I am here today. That one experience deeply shaped the way I look at art, design, and engineering.

I had the good fortune to grow up in one of the world's great cities, in a family full of love and of gifted artists. My father, John Ferren, who died when I was 15, was an artist of both passion and profession, as is my mother, Rae. He was a New York School abstract expressionist who, together with his contemporaries, invented American modern art and helped push the American zeitgeist into 20th-century modernism. Isn't it remarkable that after millennia in which human art was largely about realism, modern art has, by comparison, existed for about 15 minutes, yet it is still spreading? As with other important innovations, these radical ideas required no new technology, only fresh thinking, a willingness to experiment, and resilience in the face of nearly universal criticism and rejection. In our home, art was everywhere, like oxygen: all around us, and necessary for life. Watching my father paint, he taught me that art was not decoration but another form of communicating ideas, and in fact a bridge joining the world's knowledge and insight.

Growing up in such an artistic environment, you might assume I would end up compelled to join the family business. But no. Like most kids, I was genetically programmed to drive my parents crazy. I had no interest in becoming an artist, let alone a painter. What I loved were electronics and machines: taking them apart, building new things out of them, making them work. Luckily, my family also had engineers in it, and like my parents they were my mentors. What they all had in common was that they worked very, very hard. My grandfather owned a sheet-metal kitchen-cabinet factory in Brooklyn. On weekends we would go together to Cortlandt Street, New York's electronics district, and dig through piles of surplus electronics, hauling treasures home for a few dollars, things like Norden bombsights and parts from IBM's first vacuum-tube computers. I found these objects both useful and fascinating. I learned engineering and how things work not at school but by taking apart and studying these fabulously complex devices. I did this for hours every day, and somehow I managed not to electrocute myself. Life was good.

Sadly, though, every summer the machines were left behind while my parents took me abroad to experience history, art, and design. We visited museums and historic buildings across Europe and the Middle East. And to encourage my interest in science and technology, they would drop me off somewhere like the London Science Museum, where I could spend a whole day studying the history of science and technology.

When I was about nine years old, we went to Rome. On one hot summer day, we visited a drum-shaped building that from the outside was not particularly interesting. My father said it was the Pantheon, a temple for all the gods. As I said, from the outside it didn't look special, but when we walked inside, I was immediately struck by three things. First, it was pleasantly cool despite the heat outside. It was very dark, the only source of light being a big hole in the roof. My father explained that this wasn't a big hole; it was called the oculus, an eye to the heavens. There was something about this place, I didn't know what, that just felt special. As we walked to the center of the room, I looked up through the oculus at the sky. This was the first church I had ever been in that offered an unrestricted view between God and man. But I wondered: what happens when it rains? My father may have called it an oculus, but it was, in fact, a big hole in the ceiling. I looked down and saw floor drains set into the stone floor. As my eyes grew more accustomed to the darkness, I could make out details of the floor and the surrounding walls. Nothing remarkable, just the same statuary we had seen all over Rome. In fact, it looked like a marble salesman from the Appian Way showing his sample catalog to Hadrian, and Hadrian saying, "We'll take the whole lot." (Note: Hadrian was one of the Roman Empire's Five Good Emperors.) (Laughter)

But the ceiling was amazing. It looked like a Buckminster Fuller geodesic dome. (Note: Buckminster Fuller was an American architect.) I had seen these before; Bucky was a friend of my father's. It was modern, high-tech, impressive: a huge 142-foot span with no columns, which, not coincidentally, was exactly its height. I loved this place. It was truly beautiful, and unlike anything else. So I asked my father, "When was this built?" He said, "About two thousand years ago." I said, "No, I mean the roof." You see, I assumed it was a modern roof, put on after the original was destroyed in some war. My father said, "It's the original roof."

From that moment my life changed, and I remember it as if it were yesterday. For the first time, I realized that people two thousand years ago were this smart. (Laughter) The thought never left my mind. I mean, the pyramids at Giza, which we had visited the year before, are certainly impressive and well designed, but give me a big enough budget, 20,000 to 40,000 workers, and ten to twenty years to cut and haul stone blocks from around the countryside, and I could build you a pyramid too. But no amount of brute force, whether two thousand years ago or today, gets you the dome of the Pantheon. It remains, even now, the largest unreinforced concrete dome ever built. To build the Pantheon took some miracles. By miracles, I mean things that are technically barely possible, very high-risk, and might not actually be achievable even today, and certainly not by you.

A few of the Pantheon's miracles. To make such a structure possible required remarkably strong concrete, and to control the weight, the density of the concrete had to be varied as the structure rose. For strength and lightness, the dome's structure uses five rings of coffers, each of diminishing size, which by design shed a great deal of the stresses. The space under the dome stays unusually cool because of its enormous interior volume, the natural convection of air rising and venting through the oculus, and a Venturi effect as outside air blows over the top of the dome. Here I discovered for the first time that light itself is a substance: the shaft of light streaming in through the oculus was both beautiful and palpable, and I realized for the first time that light could be designed. Moreover, every form of design, visual design, ultimately depends on light, because without light you can see none of it. I also realized that I was not the first person to think this place was special. It had survived gravity, barbarians, looters, developers, and the ravages of time to become what I believe is the longest-lived building in history.

Because of that visit, I began to understand that, contrary to what I was being taught in school, the worlds of art and design were not, in fact, incompatible with the worlds of science and engineering. I realized that when the two combine, you can create things that neither could achieve alone. But in school, with few exceptions, they were treated as separate worlds, and they still are. My teachers all told me I had to get serious and focus on one or the other. But urging me to specialize in a single field only made me appreciate those polymaths - Michelangelo, Leonardo da Vinci, Benjamin Franklin - people who were masters of far more than one field, and it made me yearn all the more to live in both worlds at once.

So how do projects of unprecedented creative vision and technical sophistication like the Pantheon actually happen? Someone, perhaps Hadrian, needed a brilliant creative vision. They also needed the storytelling and leadership skills to make that vision real, plus the mastery of science and technology, the expertise and skill, to push existing innovations further still. I believe that creating such game-changers requires at least five miracles. The problem is, no matter how smart or rich you are, you get at most one and a half miracles. That's it. Then you run out of time, money, enthusiasm, everything. Remember, most people cannot imagine even one of these technical miracles, and you need at least five to build a Pantheon. In my experience, only a few rare visionaries who can move between the worlds of art and design and science and engineering are able to notice when enough of the other miracles already exist to bring the dream within reach. Driven by the clarity of their vision, they summon the courage and will to create the remaining miracles, and they often take what other people see as insurmountable obstacles and turn them into features of the vision itself. Take the oculus of the Pantheon: insisting on that design meant you could not use the structural techniques developed for Roman arches. Instead, by rethinking how weight and stress were distributed, they arrived at an entirely new design that only works if there is a hole in the ceiling. Once built, it delivers not just aesthetics but light, cooling, and a direct channel to the heavens. Not bad. These people didn't just believe that the impossible could be done; they believed it had to be done.

So much for the history lesson. What recent innovations combine singular design and advanced technology in a way that people will still remember a thousand years from now? Putting a man on the moon was pretty good, and bringing him back to Earth safely was, too. Talk about a giant leap for mankind: it is hard to imagine a moment in human history to rival the one when we first left our own world to set foot upon another.

So what comes after the moon? One might say that today's Pantheon is the Internet, but I actually think that is wrong, or at least it is only part of the story. The Internet is not a Pantheon. It is more like the invention of concrete: important, absolutely necessary to build the Pantheon, and enduring, but entirely insufficient by itself. Just as the technology of concrete was critical to realizing the Pantheon, innovators will use the Internet to create new things that endure. The smartphone is a perfect example. Soon the majority of people on the planet will have one, and the idea of connecting everyone to both knowledge and each other will endure.

So what is next? Where is the next landmark on a par with the Pantheon? Thinking about this, I will not accept as an answer things that are merely plausible or merely major breakthroughs, such as curing cancer. Why? Because Pantheons are anchored in designed physical objects, ones that came from simple observation and relentless refinement, and that can inspire without end; theirs is an art-like contribution that words struggle to capture. Breakthroughs that extend life and relieve suffering are, of course, indispensable, and magnificent, but they are only part of the continuum of our overall knowledge and technology, just like the Internet.
So what is next? It may seem counterintuitive, but my guess is a visionary idea from the late 1930s that has been revived by every generation since: autonomous vehicles. You may be thinking, come on, how can a fancier version of cruise control be profound? But look: much of our world has been designed around roads and transportation. Roads were as essential to the rise of the Roman Empire as the interstate highway system has been to the prosperity and modernization of the United States. Today, these roads that interconnect our world are used mainly by cars and trucks that have remained essentially unchanged for a hundred years. Though it may not be obvious today, autonomous vehicles will be the key technology that lets us redesign our cities and, by extension, our culture. Why? Once they are ubiquitous, every year they could prevent tens of thousands of deaths in the United States alone, and over a million worldwide. Automotive fuel consumption and air pollution would fall dramatically. Most of the congestion into and out of our cities would disappear. All of this would inspire new ways of thinking about how we design our cities and our workplaces, and the way we live. We would get where we are going faster, and society would recapture vast amounts of lost productivity now spent sitting in traffic, basically polluting.

But why now? Why can this be achieved today? Because over the past 30 years, people outside the automotive industry have spent countless billions creating the necessary miracles, though for entirely different purposes. It took people like DARPA (Note: DARPA is the U.S. Defense Advanced Research Projects Agency), universities, and companies completely outside the automotive industry to realize that, if you were clever about it, the autonomous car could be achieved now. So what are the five miracles an autonomous car needs? First, you need to know exactly where you are and what time it is. This was solved neatly by GPS, the Global Positioning System, developed by the U.S. government. You need to know where all the roads are, what the rules are, and where you are going. This is handled by navigation systems, in-car navigation, and internet-based maps. You need near-continuous, real-time communication with high-performance computing to understand the dynamics of the vehicles around you. The wireless technology developed for mobile phones, with only slight modification, solves this completely. You will probably want some road restrictions that society and its lawyers can accept, things like high-occupancy-vehicle lanes, and other extensions of them. But finally, you need to recognize people, signs, and objects. Machine vision, special sensors, and high-performance computing can do much of that, but it turns out that is still not quite good enough, especially when your family is on board. Occasionally, humans will need to step in and make decisions. You may even have to wake a passenger and ask what that bump in the middle of the road just was. Not bad, and it already gives us the outlines of this new world. Moreover, once one driver explains to his confused car that the giant chicken on a fork up ahead is just a restaurant, the car can calmly drive on, and every other car on the surface of the earth will know it from that point on.

The fifth miracle, which is nearly in hand, is a clear vision: a wonderful world full of autonomous vehicles, with seductively designed and fully functional implementations, plus a great deal of money and effort, to bring it home. It will take only a few more years to arrive, and I predict that autonomous vehicles will permanently change our world for decades to come.

In conclusion, I have come to believe that the ingredients for the next Pantheons are all around us, waiting for visionary people, people with broad knowledge, multidisciplinary skills, and intense passion, to make the dream real. But such people do not spontaneously appear. They need to be nurtured and encouraged from childhood. We need to love them and help them discover their passions. We need to encourage them to work hard, and help them understand that failure is a necessary ingredient of success, as is perseverance. We need to help them find their own role models, and give them the confidence to believe in themselves, to believe that anything is possible, just as my grandfather did when he took me along on shopping trips, and just as my parents did when they took me to science museums. We need to encourage them to find their own path, even when it is very hard.

But there is one caveat: we also need to periodically pry them away from the modern miracles, the computers, phones, tablets, game machines, and TVs, and take them outside so they can experience the wonderful design of our world, our planet, and our culture. If we don't, they won't understand how precious these things are that one day it will be their turn to protect and improve. We also need them to understand something that doesn't seem adequately appreciated in this age of rapid technological progress: that art and design are not luxuries, nor are they somehow incompatible with science and technology. They are, in fact, essential to what makes us special.

Someday, if you get the chance, perhaps you can take your kids to the actual Pantheon, as I took my daughter Kira, to experience firsthand the power of that astonishing design, which, on one otherwise unremarkable day in Rome, reached 2,000 years into the future to shape the course of my life.
( Note: the self-driving connected car, combining supercomputer chips, artificial intelligence, smartphones and communications, all manner of sensors, radar, robotic autonomous driving, GPS, human-machine interaction, and touchscreen technology, may be the greatest invention of the future. )


Wednesday, January 7, 2015

From machine learning to a revolution in human social structures - giant cloud robots will change human society within 20 years ( From machine learning to human social structures revolution - the giant cloud robot will change human society within 20 years )

It used to be that if you wanted to get a computer to do something new, you would have to program it. Now, programming, for those of you here that haven't done it yourself, requires laying out in excruciating detail every single step that you want the computer to do in order to achieve your goal. Now, if you want to do something that you don't know how to do yourself, then this is going to be a great challenge.

So this was the challenge faced by this man, Arthur Samuel. In 1956, he wanted to get this computer to be able to beat him at checkers. How can you write a program, lay out in excruciating detail, how to be better than you at checkers? So he came up with an idea: he had the computer play against itself thousands of times and learn how to play checkers. And indeed it worked, and in fact, by 1962, this computer had beaten the Connecticut state champion.
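
Samuel's scheme, a program improving by playing against itself, can be sketched on a far smaller game. The sketch below is not his checkers program; it applies the same self-play principle to single-pile Nim, with invented learning-rate and exploration settings:

```python
# Self-play learning on single-pile Nim (take 1-3 stones; taking the last stone wins).
# Illustrates Arthur Samuel's idea of a program improving by playing itself;
# the game, parameters, and value-table method are ours, not his checkers program's.

import random

random.seed(0)
value = {0: 0.0}  # value[s] = estimated chance that the player to move from s wins

def best_move(stones, explore=0.0):
    """Choose the move that leaves the opponent in the lowest-value state."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return min(moves, key=lambda m: value.get(stones - m, 0.5))

# Train through thousands of self-play games, backing the final win/loss
# up to every state visited along the way.
for _ in range(5000):
    stones, history = 10, []
    while stones > 0:
        history.append(stones)
        stones -= best_move(stones, explore=0.3)
    # The player who took the last stone won; walking the game backwards,
    # states alternate between the winner's turns and the loser's turns.
    for i, s in enumerate(reversed(history)):
        target = 1.0 if i % 2 == 0 else 0.0
        value[s] = value.get(s, 0.5) + 0.1 * (target - value.get(s, 0.5))

# The learned policy recovers the known winning strategy for this game:
# always leave the opponent a multiple of 4 stones.
print(best_move(10))  # 2
```

No move of Nim was ever programmed in by hand; the table of position values emerges entirely from the program playing itself, which is the essence of Samuel's breakthrough.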

So Arthur Samuel was the father of machine learning, and I have a great debt to him, because I am a machine learning practitioner. I was the president of Kaggle, a community of over 200,000 machine learning practitioners. Kaggle puts up competitions to try and get them to solve previously unsolved problems, and it's been successful hundreds of times. So from this vantage point, I was able to find out a lot about what machine learning can do in the past, can do today, and what it could do in the future. Perhaps the first big success of machine learning commercially was Google. Google showed that it is possible to find information by using a computer algorithm, and this algorithm is based on machine learning. Since that time, there have been many commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest products that you might like to buy, movies that you might like to watch. Sometimes, it's almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about who your friends might be and you have no idea how it did it, and this is because it's using the power of machine learning. These are algorithms that have learned how to do this from data rather than being programmed by hand.

This is also how IBM was successful in getting Watson to beat the two world champions at "Jeopardy," answering incredibly subtle and complex questions like this one. ["The ancient 'Lion of Nimrud' went missing from this city's national museum in 2003 (along with a lot of other stuff)"] This is also why we are now able to see the first self-driving cars. If you want to be able to tell the difference between, say, a tree and a pedestrian, well, that's pretty important. We don't know how to write those programs by hand, but with machine learning, this is now possible. And in fact, this car has driven over a million miles without any accidents on regular roads.

So we now know that computers can learn, and computers can learn to do things that we actually sometimes don't know how to do ourselves, or maybe can do them better than us. One of the most amazing examples I've seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy called Geoffrey Hinton from the University of Toronto won a competition for automatic drug discovery. Now, what was extraordinary here is not just that they beat all of the algorithms developed by Merck or the international academic community, but nobody on the team had any background in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm called deep learning. So important was this that in fact the success was covered in The New York Times in a front page article a few weeks later. This is Geoffrey Hinton here on the left-hand side. Deep learning is an algorithm inspired by how the human brain works, and as a result it's an algorithm which has no theoretical limitations on what it can do. The more data you give it and the more computation time you give it, the better it gets.

The New York Times also showed in this article another extraordinary result of deep learning which I'm going to show you now. It shows that computers can listen and understand.

(Video) Richard Rashid: Now, the last step that I want to be able to take in this process is to actually speak to you in Chinese. Now the key thing there is, we've been able to take a large amount of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text and converts it into Chinese language, and then we've taken an hour or so of my own voice and we've used that to modulate the standard text-to-speech system so that it would sound like me. Again, the result's not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There's much work to be done in this area. (In Chinese)

Jeremy Howard: Well, that was at a machine learning conference in China. It's not often, actually, at academic conferences that you do hear spontaneous applause, although of course sometimes at TEDx conferences, feel free. Everything you saw there was happening with deep learning. (Applause) Thank you. The transcription in English was deep learning. The translation to Chinese and the text in the top right, deep learning, and the construction of the voice was deep learning as well.

So deep learning is this extraordinary thing. It's a single algorithm that can seem to do almost anything, and I discovered that a year earlier, it had also learned to see. In this obscure competition from Germany called the German Traffic Sign Recognition Benchmark, deep learning had learned to recognize traffic signs like this one. Not only could it recognize the traffic signs better than any other algorithm, the leaderboard actually showed it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see better than people. Since that time, a lot has happened. In 2012, Google announced that they had a deep learning algorithm watch YouTube videos and crunched the data on 16,000 computers for a month, and the computer independently learned about concepts such as people and cats just by watching the videos. This is much like the way that humans learn. Humans don't learn by being told what they see, but by learning for themselves what these things are. Also in 2012, Geoffrey Hinton, who we saw earlier, won the very popular ImageNet competition, looking to try to figure out from one and a half million images what they're pictures of. As of 2014, we're now down to a six percent error rate in image recognition. This is better than people, again.

So machines really are doing an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers. Imagine how long it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded to Baidu's deep learning system, and underneath you can see that the system has understood what that picture is and found similar images. The similar images actually have similar backgrounds, similar directions of the faces, even some with their tongue out. This is not clearly looking at the text of a web page. All I uploaded was an image. So we now have computers which really understand what they see and can therefore search databases of hundreds of millions of images in real time.

So what does it mean now that computers can see? Well, it's not just that computers can see. In fact, deep learning has done more than that. Complex, nuanced sentences like this one are now understandable with deep learning algorithms. As you can see here, this Stanford-based system showing the red dot at the top has figured out that this sentence is expressing negative sentiment. Deep learning now in fact is near human performance at understanding what sentences are about and what it is saying about those things. Also, deep learning has been used to read Chinese, again at about native Chinese speaker level. This algorithm was developed in Switzerland by people, none of whom speak or understand any Chinese. As I say, this deep learning system is about the best in the world for this, even compared to native human understanding.
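
The "learn sentiment from labeled examples" idea can be illustrated at toy scale. The sketch below is nothing like the Stanford recursive neural network described above; it is a tiny bag-of-words perceptron on invented review snippets, shown only to make the learning loop concrete:

```python
# Toy bag-of-words sentiment classifier (perceptron), for illustration only.
# The deep learning systems discussed in the talk learn far richer structure;
# this just shows the basic idea of learning word weights from labeled data.

from collections import defaultdict

# Invented training snippets: 1 = positive review, 0 = negative review.
train = [
    ("a wonderful , moving film", 1),
    ("brilliant acting and a great script", 1),
    ("dull , predictable and boring", 0),
    ("a terrible waste of time", 0),
    ("great fun from start to finish", 1),
    ("boring script , terrible pacing", 0),
]

weights = defaultdict(float)

def score(text):
    """Sum the learned weight of every word; positive total = positive sentiment."""
    return sum(weights[w] for w in text.split())

# Perceptron rule: nudge word weights whenever the current prediction is wrong.
for _ in range(10):
    for text, label in train:
        pred = 1 if score(text) > 0 else 0
        if pred != label:
            for w in text.split():
                weights[w] += 1 if label == 1 else -1

print(score("a great film") > 0)           # True
print(score("boring , terrible script") > 0)  # False
```

The point of the example is the absence of hand-written rules: no one told the program which words are negative; the weights for "boring" and "terrible" went negative purely because of the labels.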

This is a system that we put together at my company which shows putting all this stuff together. These are pictures which have no text attached, and as I'm typing in here sentences, in real time it's understanding these pictures and figuring out what they're about and finding pictures that are similar to the text that I'm writing. So you can see, it's actually understanding my sentences and actually understanding these pictures. I know that you've seen something like this on Google, where you can type in things and it will show you pictures, but actually what it's doing is it's searching the webpage for the text. This is very different from actually understanding the images. This is something that computers have only been able to do for the first time in the last few months.

So we can see now that computers can not only see but they can also read, and, of course, we've shown that they can understand what they hear. Perhaps not surprising now that I'm going to tell you they can write. Here is some text that I generated using a deep learning algorithm yesterday. And here is some text that an algorithm out of Stanford generated. Each of these sentences was generated by a deep learning algorithm to describe each of those pictures. This algorithm before has never seen a man in a black shirt playing a guitar. It's seen a man before, it's seen black before, it's seen a guitar before, but it has independently generated this novel description of this picture. We're still not quite at human performance here, but we're close. In tests, humans prefer the computer-generated caption one out of four times. Now this system is now only two weeks old, so probably within the next year, the computer algorithm will be well past human performance at the rate things are going. So computers can also write.

So we put all this together and it leads to very exciting opportunities. For example, in medicine, a team in Boston announced that they had discovered dozens of new clinically relevant features of tumors which help doctors make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that, looking at tissues under magnification, they've developed a machine learning-based system which in fact is better than human pathologists at predicting survival rates for cancer sufferers. In both of these cases, not only were the predictions more accurate, but they generated new insightful science. In the radiology case, they were new clinical indicators that humans can understand. In this pathology case, the computer system actually discovered that the cells around the cancer are as important as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists had been taught for decades. In each of those two cases, they were systems developed by a combination of medical experts and machine learning experts, but as of last year, we're now beyond that too. This is an example of identifying cancerous areas of human tissue under a microscope. The system being shown here can identify those areas more accurately, or about as accurately, as human pathologists, but was built entirely with deep learning using no medical expertise by people who have no background in the field. Similarly, here, this neuron segmentation. We can now segment neurons about as accurately as humans can, but this system was developed with deep learning using people with no previous background in medicine.

So myself, as somebody with no previous background in medicine, I seem to be entirely well qualified to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest that it ought to be possible to do very useful medicine using just these data analytic techniques. And thankfully, the feedback has been fantastic, not just from the media but from the medical community, who have been very supportive. The theory is that we can take the middle part of the medical process and turn that into data analysis as much as possible, leaving doctors to do what they're best at. I want to give you an example. It now takes us about 15 minutes to generate a new medical diagnostic test and I'll show you that in real time now, but I've compressed it down to three minutes by cutting some pieces out. Rather than showing you creating a medical diagnostic test, I'm going to show you a diagnostic test of car images, because that's something we can all understand.

So here we're starting with about 1.5 million car images, and I want to create something that can split them into the angle of the photo that's being taken. So these images are entirely unlabeled, so I have to start from scratch. With our deep learning algorithm, it can automatically identify areas of structure in these images. So the nice thing is that the human and the computer can now work together. So the human, as you can see here, is telling the computer about areas of interest which it wants the computer then to try and use to improve its algorithm. Now, these deep learning systems actually are in 16,000-dimensional space, so you can see here the computer rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then point out the areas that are interesting. So here, the computer has successfully found areas, for example, angles. So as we go through this process, we're gradually telling the computer more and more about the kinds of structures we're looking for. You can imagine in a diagnostic test this would be a pathologist identifying areas of pathosis, for example, or a radiologist indicating potentially troublesome nodules. And sometimes it can be difficult for the algorithm. In this case, it got kind of confused. The fronts and the backs of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts as opposed to the backs, then telling the computer that this is a type of group that we're interested in.

So we do that for a while, we skip over a little bit, and then we train the machine learning algorithm based on these couple of hundred things, and we hope that it's gotten a lot better. You can see, it's now started to fade some of these pictures out, showing us that it already is recognizing how to understand some of these itself. We can then use this concept of similar images, and using similar images, you can now see, the computer at this point is able to entirely find just the fronts of cars. So at this point, the human can tell the computer, okay, yes, you've done a good job of that.

Sometimes, of course, even at this point it's still difficult to separate out groups. In this case, even after we let the computer try to rotate this for a while, we still find that the left-side and right-side pictures are all mixed up together. So we can again give the computer some hints, and we say, okay, try and find a projection that separates out the left sides and the right sides as much as possible using this deep learning algorithm. And giving it that hint -- ah, okay, it's been successful. It's managed to find a way of thinking about these objects that separates the left sides from the right sides.

So you get the idea here. This is a case not where the human is being replaced by a computer, but where they're working together. What we're doing here is we're replacing something that used to take a team of five or six people about seven years and replacing it with something that takes 15 minutes for one person acting alone.

So this process takes about four or five iterations. You can see we now have 62 percent of our 1.5 million images classified correctly. And at this point, we can start to quite quickly grab whole big sections, check through them to make sure that there's no mistakes. Where there are mistakes, we can let the computer know about them. And using this kind of process for each of the different groups, we are now up to an 80 percent success rate in classifying the 1.5 million images. And at this point, it's just a case of finding the small number that aren't classified correctly, and trying to understand why. And using that approach, by 15 minutes we get to 97 percent classification rates.
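
The iterate, correct, retrain loop described above can be sketched at toy scale. The sketch below swaps the talk's deep learning system for a nearest-centroid model on synthetic 2-D points (all data and numbers invented), keeping only the loop structure: the model proposes groups, a simulated human corrects a few mistakes, and the model is refit:

```python
# Human-in-the-loop labeling, sketched on synthetic 2-D points.
# Only the iterate-label-retrain structure matches the process in the text;
# the nearest-centroid model and the data are stand-ins for illustration.

import random

random.seed(1)

def sample(center):
    """Draw one noisy point around a cluster center."""
    return (center[0] + random.gauss(0, 1), center[1] + random.gauss(0, 1))

# Two hidden groups (think "fronts" and "backs" of cars); the true labels are
# used only to simulate the human's corrections and to score accuracy.
truth = {0: (0.0, 0.0), 1: (3.0, 3.0)}
points = [(sample(truth[c]), c) for c in (0, 1) for _ in range(200)]

def classify(p, centroids):
    return min(centroids, key=lambda c: (p[0] - centroids[c][0]) ** 2
                                      + (p[1] - centroids[c][1]) ** 2)

def accuracy(centroids):
    return sum(classify(p, centroids) == c for p, c in points) / len(points)

# The "human" seeds each group with one hand-labeled example, then each round
# corrects a few of the model's mistakes; the model re-estimates its group
# centroids from everything labeled so far.
labeled = {0: [points[0][0]], 1: [points[200][0]]}
for round_ in range(5):
    centroids = {c: (sum(x for x, _ in pts) / len(pts),
                     sum(y for _, y in pts) / len(pts))
                 for c, pts in labeled.items()}
    mistakes = [(p, c) for p, c in points if classify(p, centroids) != c]
    print(f"round {round_}: accuracy {accuracy(centroids):.2f}, {len(mistakes)} mistakes")
    for p, c in mistakes[:10]:  # the human corrects up to ten mistakes per round
        labeled[c].append(p)
```

As in the car-image demo, the human never labels the whole dataset; a handful of corrections per round steers a model that does the bulk of the classification itself.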
So this kind of technique could allow us to fix a major problem, which is that there's a lack of medical expertise in the world. The World Economic Forum says that there's between a 10x and a 20x shortage of physicians in the developing world, and it would take about 300 years to train enough people to fix that problem. So imagine if we can help enhance their efficiency using these deep learning approaches?

So I'm very excited about the opportunities. I'm also concerned about the problems. The problem here is that every area in blue on this map is somewhere where services are over 80 percent of employment. What are services? These are services. These are also the exact things that computers have just learned how to do. So 80 percent of the world's employment in the developed world is stuff that computers have just learned how to do. What does that mean? Well, it'll be fine. They'll be replaced by other jobs. For example, there will be more jobs for data scientists. Well, not really. It doesn't take data scientists very long to build these things. For example, these four algorithms were all built by the same guy. So if you think, oh, it's all happened before, we've seen the results in the past of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It's very hard for us to estimate this, because human performance grows at this gradual rate, but we now have a system, deep learning, that we know actually grows in capability exponentially. And we're here. So currently, we see the things around us and we say, "Oh, computers are still pretty dumb." Right? But in five years' time, computers will be off this chart. So we need to be starting to think about this capability right now.

We have seen this once before, of course. In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all the situations, things really settled down. The Machine Learning Revolution is going to be very different from the Industrial Revolution, because the Machine Learning Revolution, it never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before, so your previous understanding of what's possible is different.

This is already impacting us. In the last 25 years, as capital productivity has increased, labor productivity has been flat, in fact even a little bit down.

So I want us to start having this discussion now. I know that when I often tell people about this situation, people can be quite dismissive. Well, computers can't really think, they don't emote, they don't understand poetry, we don't really understand how they work. So what? Computers right now can do the things that humans spend most of their time being paid to do, so now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality. Thank you!


Thursday, July 24, 2014

2014 ~ 2015 Survey of the Global Robotics Market

The World's Largest Robot Market Is Nearly Monopolized by Foreign Capital

"As a bamboo grows, it gains only 3 centimeters a year for the first four years, but from the fifth year on it can grow wildly at 30 centimeters a day, shooting up to 15 meters in just six weeks. China's robot industry is still in the 3-centimeters-a-year stage." That is how Zhao Jie, head of the robotics technology theme group of the national 863 Program, described the current state of China's robot industry to a China Securities Journal reporter.

Unlike the capital markets, where the robot concept has been running a "high fever" since last year, industry insiders were mostly calm at the China International Robot Show held on July 9. Attendees told the China Securities Journal that although China has recently become the world's largest robot market, it faces a series of increasingly prominent problems: weak homegrown brands, lagging R&D on core components, low product recognition and added value, and overcapacity at the low end. Wang Weiming, deputy director of the Equipment Industry Department at the Ministry of Industry and Information Technology, described this as "a historic opportunity coexisting with difficult challenges," one that will require joint efforts from national ministries, local governments and individual enterprises to resolve.

"In those first four years, the bamboo is actually spreading its roots across hundreds of square meters of soil, and China's robot industry is still in that rooting stage," Zhao Jie noted. Only by building up strength through years of "3 centimeters" can China's robot industry expand rapidly when the time is ripe. For the foreseeable future this "3 centimeter" phase cannot be skipped, and the industry needs to find a new, sustainable development model.

The World's Largest Robot Market

At the China International Robot Show, Wang Ruixiang, president of the China Machinery Industry Federation, said robots are hailed as the jewel in the crown of manufacturing: their development, production and application have become a yardstick of a country's technological innovation and high-end manufacturing. Encouragingly, China's industrial robot sales have been growing rapidly in recent years, and in 2013 China became the world's largest industrial robot market.

The China Robot Industry Alliance and the International Federation of Robotics recently exchanged statistics to produce the first reasonably comprehensive survey of China's industrial robot market. The data show that in 2013, domestic companies sold more than 9,500 industrial robots in China, up 65.5% year-on-year on a comparable basis, while foreign companies sold more than 27,000 units in China, up 20%. In other words, nearly 37,000 industrial robots were sold in China in 2013, about one-fifth of global sales. Total sales surpassed Japan's, making China the world's largest industrial robot market.
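As a quick consistency check on these figures, the global total can be backed out from the stated one-fifth share. The unit counts below are the article's; the back-calculation from the share is my own arithmetic:

```python
# China's 2013 industrial robot market, using the figures quoted above.
domestic_units = 9_500    # lower bound: "more than 9,500" (up 65.5% y/y)
foreign_units = 27_000    # lower bound: "more than 27,000" (up 20% y/y)

total_units = domestic_units + foreign_units
print(total_units)        # 36500 -- the article's "nearly 37,000 units"

# The article puts this at about one-fifth of global sales,
# so the implied global market is roughly five times larger:
implied_global = total_units * 5
print(implied_global)     # 182500 units worldwide
```

The ~182,500-unit global figure is only an implication of the article's own numbers, not an independently reported statistic.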

"China, once the world's factory, has now become the biggest buyer in the global robot market," Wang Ruixiang said. "In 2013 the country had 81,000 industrial enterprises above designated size, with main-business revenue exceeding 20 trillion yuan. The steady climb in robot sales shows robots penetrating ever deeper into Chinese industry."

Many professionals are optimistic about the knock-on effects of this "year one" of robotics. An analyst at China Galaxy Securities said China's industrial robot sales growth exceeded 60%, far above market expectations. Driven by the third global industrial transfer, economic restructuring, the disappearing demographic dividend and strong national policy support, the robot industry should keep beating expectations in demand, sentiment and catalysts, and is expected to sustain high growth through 2020.

Analysts at Qilu Securities estimate that from 2014 onward China will sell about 35,000 industrial robots a year, a market worth 10 billion yuan. Since robots are mainly deployed to support and upgrade manufacturing capacity, the industry cluster they drive could reach 80 to 100 billion yuan.
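Those estimates imply an average selling price per robot, and the cluster forecast amounts to an 8x-10x multiplier on the direct market. A minimal sketch; the figures are from the paragraph above, the divisions are mine:

```python
# Qilu Securities' estimate: ~35,000 robots a year in a market worth
# 10 billion yuan, with a driven industry cluster of 80-100 billion yuan.
units_per_year = 35_000
market_yuan = 10e9

avg_price = market_yuan / units_per_year   # implied average system price
print(round(avg_price))                    # 285714 yuan per robot

# The cluster forecast is an 8x-10x multiplier on the direct market:
cluster_multiplier_low = 80e9 / market_yuan    # 8.0
cluster_multiplier_high = 100e9 / market_yuan  # 10.0
```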

Multiple Gaps Throw "Cold Water" on the Industry

While acknowledging the enormous growth potential of China's robot industry, Wang Ruixiang pivoted to the many gaps and problems it still faces compared with the international state of the art. "Overall, China's robot industry is developing relatively slowly. In per-capita terms, robot density remains low: at the end of 2013 China had only 23 robots per 10,000 workers, less than half the world average. More troubling, at the company level our robot makers have weak core technologies of their own, depend heavily on imported key components, and are still feeling their way on applications and products. In the market, Chinese products hold only a small international share, while international brands hold a commanding lead in our own market. Clearly, Chinese products still lag foreign brands considerably in market recognition, credibility and added value."

Zhao Jie walked the China Securities Journal through the structure of China's industrial robot market. In 2013, the more technology-intensive articulated (multi-joint) robots used in welding and similar applications were almost entirely monopolized by foreign companies. Among robots sold by domestic firms, Cartesian (coordinate) robots were the main product, accounting for more than 40% and outnumbering the comparable robots sold in China by foreign companies. "This shows that domestic robots are still mainly three- and four-axis machines: low-priced industrial robots applied in areas with modest performance requirements."

Solidiance's Asia-Pacific president, 迪埃特, observed that foreign robot giants clearly dominate the Chinese market; despite the rapid growth of China's robot industry in recent years and local firms' efforts to catch up, they remain in a weak position.

The "catching up" 迪埃特 referred to includes forming joint ventures and importing advanced technology. Among listed companies, for example, Yawei (亞威股份) is entering a robot joint venture and technology-licensing partnership with Reis, a company controlled by the top-tier German robot maker Kuka. Reis will license Yawei the technology needed to produce its full range of linear robots and horizontal multi-joint robots; Yawei will pay a license fee of 6.111 million euros, and both sides have agreed that Yawei will sublicense the joint venture to manufacture and assemble the licensed products.

"Domestic companies should combine bringing in capital, talent and technology, and link industry with academia and research, to push China's robot industry forward," Wang Ruixiang said.

That is not the only bucket of cold water. An analyst at CICC noted that precisely because the industry's high growth is concentrated in low- and mid-end products such as Cartesian robots, and with local governments everywhere launching robot projects and companies of every kind trying to pivot into robotics, low-end overcapacity is likely to appear very soon.

At the show, the China Securities Journal also saw some companies starting to compete on price to grab market share. Take smart mobility scooters: with manufacturers in Shanghai, Tianjin and elsewhere all producing similar products, the prices offered to distributors when opening up regional markets have begun to soften.

Li Yichuan, a capital-markets sentiment strategist, pointed out that robot projects require heavy investment and long cycles, and market demand is not growing explosively. If the industry expands homogeneously and too fast in the short term, everyone will end up crowded into the least valuable links of the industry chain, creating overcapacity and landing robotics in the same predicament as the photovoltaic industry a few years ago.

Breaking Through from Downstream Up

"Almost every emerging industry goes through wild growth, duplicated construction and a stampede phase," one expert told the China Securities Journal, expressing hope that a few years of practice will wash out the sand and let technically and economically capable companies settle in, putting China's robot industry on a healthy track.

"Although China's robot industry lacks original innovation, has an incomplete industrial ecosystem and weak industry clustering, it also has four driving forces," Zhao Jie said. In manufacturing restructuring and upgrading, China's per-capita labor productivity lags far behind developed countries. On labor costs, wages at Pearl River Delta manufacturers rose 9.2% in 2013 versus just 7.6% in 2012; rising wages and social-insurance burdens create an opening for robots. Population aging is accelerating: by 2022 China's urban manufacturing workforce is expected to shrink by 7 million and its migrant workforce by more than 10 million, and a shrinking working-age population will speed the replacement of workers with robots. Finally, demand from China's low-end, labor-intensive manufacturers is huge: auto parts, polishing, grinding, welding and other trades hazardous to human health are all expected to shift gradually to robots.

Where, then, is the breakout path for China's robot industry against such strong rivals? A China Galaxy Securities analyst argued that with the upstream and midstream monopolized, pushing upward from downstream may be the feasible route. "Upstream, no domestic company can yet supply reducers and other core components at scale and with reliable performance, which keeps domestic robot costs high. Midstream, core technologies such as robot bodies are also monopolized by foreign brands. Only downstream have domestic system integrators grown quickly on home-field advantage. Given this, the most practical industrialization path for Chinese robotics is to start from integration and expand gradually into the midstream and upstream, implementing in stages the American model (integration), the Japanese model (core technology) and the German model (division of labor)."

A senior executive at the consultancy 高工鋰電 told the China Securities Journal that to break the bottleneck of localizing robots, domestic robot companies must first pin down their segment and positioning. Many Chinese firms chase "big and comprehensive," but robotics is a highly specialized industry that demands focus, and companies' blindness about their own positioning is holding back their development. "In fact, leading companies can grow small and mid-sized firms through long-term cooperation and strengthen matching and coordination up and down the industry chain. Samsung and LG have both tried this approach, with very good results."

Analysts at Qilu Securities believe the "cloud" will be a key force in the breakout. "Cloud-based control systems built on shared data will largely transcend the constraints of time and space: by networking robots to enlarge their library of situational responses, they will strengthen intelligent robots' understanding and decision-making. Intelligent robots are the inevitable direction for China's robot industry."


At Sarangchae, a museum affiliated with Cheong Wa Dae, a kiosk-shaped robot greets guests with a heartwarming smile. Named Tiro, the machine introduces Korea’s culture and gives directions and answers in four different languages when visitors pick a question on its LED screen. Also on display are dancing robots, known as Metal Fighter, and robotic dogs called Genibo, which amuse both young and old and offer a glimpse into Korea’s advances in the industry.

“Tiro is enjoying explosive popularity, especially among foreigners,” said Chin Hun-kook, chief marketing officer of Hanool Robotics Corp, which co-developed the project with four university research institutes. “Chinese, Japanese and other foreign tourists scramble to take pictures of themselves with it.”

For Hanool engineers, Tiro is much more than mindless metal. A couple of years ago, the humanoid was the emcee at a company engineer’s wedding. “It was a fun, very special ceremony,” Chin said. “We input speeches into Tiro with a timeline beforehand and everything went smoothly.” 

Tiro and its peers were installed in September 2010 in the presidential museum, visited by a monthly average of 70,000. They help attract more visitors, save costs on manpower, and more importantly are an advertisement of the nation’s cutting-edge technologies that made it a global leader in chips, mobile phones, TVs, display panels, and robotics that combine them all.

Korea is the world’s fourth-largest robot producer behind Japan, Germany and the U.S., controlling about 10 percent of global sales in 2009. Since 2003, the government has been boosting investment and supporting research and development in the nascent industry as one of the 10 next-generation growth engines. 

Korea is now more ambitious as the mainstay of the industry is shifting from industrial machines to service robots such as home and office helpmates, robot teachers, surgery arms and other applications in which Korea has a strong potential with its electronics and telecommunications competitiveness.

Last month, the Ministry of Knowledge Economy announced a comprehensive package to promote the fledgling service robotics industry. It will spend 30 billion won ($26.7 million) for their development, commercialization, standardization and marketing over the next seven years. 

In June, 11 consortiums of robot manufacturers and research institutions were selected to conduct six-month pilot programs into which the government has injected more than 2.1 billion won. Consortium leaders included Samsung Techwin, Nautilus Hyosung and Future Robot, the Korea Institute of Science and Technology and Kwangwoon University.

Growing market

Service robots typically assist humans with dirty, dangerous and repetitive tasks. They are used for household and office chores as well as professional tasks such as surgery, navigation, milking, education, rescue, demining and military patrol operations.

With the workforce aging and labor costs rising, global sales of service robots are estimated to jump more than 26-fold to $85.5 billion in 2018 from $3.2 billion in 2008, according to the knowledge economy ministry data. The global robot market could reach $190 billion in 2020. 
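The "more than 26-fold" jump corresponds to a compound annual growth rate that can be backed out of the forecast. The endpoint figures are the ministry's as quoted above; the standard CAGR formula and the derived rate are my own arithmetic:

```python
# Back out the compound annual growth rate implied by the forecast:
# service robot sales of $3.2B in 2008 growing to $85.5B in 2018.
start_b, end_b, years = 3.2, 85.5, 10

multiple = end_b / start_b           # ~26.7x, the "more than 26-fold" jump
cagr = multiple ** (1 / years) - 1   # compound annual growth rate
print(f"{multiple:.1f}x overall, {cagr:.1%} per year")  # 26.7x, 38.9% per year
```

A sustained ~39% annual growth rate puts the forecast well above typical industrial-equipment growth, which is the ministry's point about service robots as a new engine.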

Around 8.7 million service robots were sold globally in 2009, up from some 7 million a year earlier, according to the Frankfurt-based International Federation of Robotics.

Of the total, household robots account for nearly 64 percent and entertainment and teaching robots 35 percent.

The Korean market is estimated at 1.02 trillion won in 2009, up more than 23 percent from 2008, according to the state-run Korea Institute for Robot Industry Advancement. Service robots and component manufacturing sectors are expanding by 40 percent each year, it said in a report.

Despite huge costs and technological limitations, the robotics industry is growing exponentially in line with the emergence of smarter, more human-like inventions with more diverse, specialized applications, the Samsung Economic Research Institute said.

“Service robots will leverage the entire industry’s growth,” the leading think tank said in a recent report. “Opportunities are still open for Korean companies to lead the market.” 

One of the leaders in Korea is Nautilus Hyosung, which has gained a reputation for its brainchild, Fantasia Robot, which served as a guide in the lobby of the city hall of Bucheon, Gyeonggi Province until last year. 

The 120-centimeter-tall robot allowed visitors to print parking permits and arrange meetings with city officials with a touch on its LED screen. 

Fitted with a number of ultrasonic sensors and a laser scanner that detects obstacles, it escorted people to places inside the building. 

The kiosk-shaped creation moves up to 50 centimeters per second and runs for eight hours once charged, said Kwon Yong-kwan, the firm’s robot development team chief. 

“Fantasia Robot is a mixture of robotics technology and Hyosung’s expertise in financial services solutions,” he said. “It won huge popularity and interest from numerous visitors throughout the period.” 

Industrial Robots – Worldwide Trends and Technology

The emergence of huge consumer markets in the BRIC countries, Turkey and the Middle East is expected to sustain rising consumer demand, which will lead to heavy investment in automation. Energy efficiency and lightweight construction materials are the main challenges for the manufacturing industry. The global robotics market will continue to grow on increased demand from the automotive sector, which accounts for more than one-third of robot sales. Beyond automotive, there is strong traction for industrial robots in sectors such as electrical and electronics, chemicals, pharmaceuticals, and food and beverage. Over 2013-2015, industrial robot sales are expected to increase by 3-5% per year on average, touching the 200,000-unit mark in 2015.

End-user demand for industrial robots

In 2012, robot sales slipped 3-4% to 159,346 units, primarily on reduced demand from the electrical and electronics sector. Electronics nevertheless still deploys robots in large numbers, driven by the need for precision, speed and quality, and the segment is growing with the rising production of personal electronic devices (mobile phones, iPads and the like). Sales to the automotive sector, by contrast, continued to rise worldwide, by 6%; it has so far been the largest end-user segment, accounting for almost 50% of demand. In Germany, robot density (the number of robots per 10,000 employees) is 1,176 in automotive versus 137 across all other sectors. Though the curve has turned down for the metal and machinery industry, it is still rising for industries such as chemicals and rubber and plastics. Increasing industrialization in emerging economies is creating huge demand for automation and a corresponding rise in robot sales, and growth should continue in the near future as robots begin to collaborate with workers and are integrated into the manufacturing process.
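Robot density, as used above, is simply installed robots per 10,000 employees. A minimal sketch reproducing the German figures: the densities (1,176 and 137) are from the text, while the robot and employee counts below are hypothetical values chosen only to make the ratios come out.

```python
def robot_density(robots: int, employees: int) -> float:
    """Installed industrial robots per 10,000 employees."""
    return robots * 10_000 / employees

# Hypothetical headcounts chosen to reproduce the densities in the text:
# Germany's automotive sector (1,176) vs. all other sectors (137).
print(robot_density(88_200, 750_000))     # 1176.0
print(robot_density(13_700, 1_000_000))   # 137.0
```

Normalizing by headcount is what makes densities comparable across sectors and countries of very different sizes.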

Five countries held 70 percent of total robot supply in 2012

In 2012, Japan, China, the United States, Korea and Germany accounted for around 70 percent of total robot sales. Exports of industrial robots from Japan have increased by about 80% in the last five years as the global market for industrial robots has expanded. With the rise of the Chinese market, Germany and the Republic of Korea have increased exports to China more than tenfold in the last five years, and Japan has more than quadrupled its exports, anticipating fiercer competition in the Chinese market. In 2012, China was the second-largest robot market in the world after Japan. Although robot sales to China increased only slightly in 2012, to about 23,000 units, it is the most rapidly growing market in the world: between 2005 and 2012, industrial robot sales there increased by about 25% per year on average.
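The quoted 25%-a-year average growth over 2005-2012 can be inverted to estimate the size of the 2005 base. The 2012 unit count and growth rate are from the text; the back-projection is my own arithmetic:

```python
# China: ~23,000 units sold in 2012 after roughly 25% average annual
# growth since 2005, i.e. 7 compounding periods.
units_2012, rate, periods = 23_000, 0.25, 7

implied_2005 = units_2012 / (1 + rate) ** periods
print(round(implied_2005))   # ~4,800 units back in 2005
```

In other words, the Chinese market grew roughly fivefold in seven years, which is what makes the "most rapidly growing market" claim concrete.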

Advances in Robotic Technology and Applications

Microprocessors, artificial intelligence techniques and innovations in automation and control systems are among the major technological advances of the past decade. The controller is an integral part of any robotic system and is instrumental in performing application tasks. A controller with more processing power allows more functions to be added to the robot, and integration into a work cell becomes easier as controllers advance. Controllers have also been shrinking in size, a trend expected to continue in the robotics industry. Greater controller capability makes robots more responsive to the rising demand for heavy automation and contributes to the declining cost of robotic systems. It also improves robot safety, so robots can be used outside the factory without any safety shield.

IRC5 is ABB’s fifth generation robot controller and is the key to robot’s performance in terms of accuracy, speed, cycle-time, programmability and synchronization with external devices.

Manufacturers have also introduced machines that let the selected robot be tailored to the requirements of the application, yielding a cost-effective solution. Highly developed robot-control software ensures outstanding performance and makes installing, operating and maintaining the robot easier.
Kawasaki Robotics has introduced PC-ROSET, PC-based simulation software that allows the user to carry out robot teaching anywhere. PC-ROSET outputs accurate cycle times, and the teaching data created can be sent to the robot controller for execution.

Analysis
  • With huge consumer markets on the rise and industrialization advancing, the trend toward automation will keep growing. China's demand for industrial robots continues to climb; the leading markets are China, Japan, Korea, Germany and the United States. Industrial robot demand comes from the automotive sector and the broad electrical and electronics industry, followed by metal and machinery, plastics and chemicals. As manufacturers push the limits of production speed and efficiency, robots will play a central role in raising productivity, efficiency and output while cutting operating costs.
  • As demand for robots keeps rising, future job opportunities will shrink and the shape of work will change dramatically.
  • With Taiwan's manufacturing share declining, will Taiwan's demand for industrial robots fall? Taiwan's industrial robot demand is an important indicator of Taiwanese manufacturing.