Wednesday, January 21, 2015

Judging from the Universiade, the Flora Expo, and the Ting Hsin case, government investment and bank lending create far too few jobs; the government should put more money into subsidizing start-ups instead

Youth unemployment climbs; today's young risk becoming the "middle-aged poor"

According to International Labour Organization statistics, 74.5 million young people aged 15 to 24 were unemployed worldwide in 2013; the global youth unemployment rate of 13% was double the overall global unemployment rate. Taiwan's youth unemployment rate last year was 13.17%, higher not only than the global average but also than South Korea's 9.75% and Japan's 6.7%.

Hsin Ping-lung, associate professor at National Taiwan University's Graduate Institute of National Development, worries that in ten years, when this cohort reaches middle age, they may still be trapped in a vicious cycle of unemployment and temp work: "Youth poverty could turn into middle-aged poverty."

Some call the post-financial-crisis generation a "lost generation". The weak international economy has dragged down the domestic job market; employers are unwilling to spend more on hiring, leaving large numbers of highly educated young people unemployed. Some of them return to school; others, pressed by money problems, settle for low-paid temp work. But once someone has been unemployed for over a year, they must compete with the next wave of graduates, and as cohort after cohort of unemployed youth accumulates, it becomes a heavy burden on the country, society, and families.

According to the same ILO statistics, Taiwan's youth unemployment rate last year was 3.2 times its overall unemployment rate. Among the ten OECD countries used for comparison, that ratio was second only to Italy's, and Taiwan's deterioration was the most severe.

Education reform produced too many degree holders, who also risk becoming the "middle-aged poor"

Tu Ying-yi, associate research fellow at the Chung-Hua Institution for Economic Research, points out that under the education reform launched in 1995, many technical colleges and junior colleges were upgraded to universities, and the number of universities in Taiwan exploded from 23 two decades ago to 122 today.

University students grew from 245,000 to 1,245,000. This mass "production" of graduates inverted Taiwan's labor structure into a top-heavy triangle: many with university degrees, few with vocational or high-school credentials.

After the flowers wilt: the truth about the NT$13.5 billion Flora Expo

The Flora Expo, which cost NT$13.591 billion, is finally over. Now that the flowers have wilted and the media buys and glossy ads are gone, the truth is starting to emerge.

The Hau administration cites "economic benefits" and "attendance" to call the expo an "unprecedented" success. But how large were those economic benefits, exactly? The number changes like the phases of the moon, and keeps inflating like a puffed-up bullfrog.

For example, on April 11, the expo's closing day, total economic benefits were claimed to be as high as NT$18.8 billion; by the 21st they had grown to NT$48.8 billion; in a May 9 briefing, Hau Lung-bin's figure changed again to over NT$43 billion, alongside a brand-new number, a "net benefit" of NT$29.4 billion. The figures tumble out like entries in a children's storytelling contest.

But actual revenue into the city treasury was only NT$1,392,870,000. Divide the NT$13.591 billion total cost by 8,963,666 admissions and each visitor cost NT$1,516.23 in tax money, the price of thirty NT$50 lunchboxes, and that is before counting the personnel cost of the civil servants seconded to work at the expo. What lasting value did this exposition leave Taiwan? What infrastructure did it leave Taipei? After 1,193 trees were transplanted and 170 of them died, is "the flowers were beautiful" or "the volunteer mobilization was moving" really enough? And what happens after the flowers wilt?
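
Since the paragraph's claims are plain division, here is a quick check of the figures, a minimal sketch in Python using only the numbers quoted above:

    # Checking the per-visitor arithmetic quoted above.
    total_cost = 135.91e8                  # NT$13.591 billion in total spending
    admissions = 8_963_666                 # total admissions
    per_visitor = total_cost / admissions
    print(per_visitor)                     # ~1516.2 NT$ of tax money per admission
    print(per_visitor / 50)                # ~30 fifty-NT$ lunchboxes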

After the flowers wilted, the exposition left Datong and Zhongshan districts a cluster of "temporary pavilions" of questionable safety. Absurdly, no parking was ever added to serve them, and even local residents do not understand what the venues are for.

Another example: temporary structures such as the Angel Life Pavilion are in principle limited to one year of use, yet on January 20, 2010 the city signed a nine-year contract with the operator, granting use of the building and land from the expo's opening day until April 25, 2020. How will these pavilions be managed in the future? Where will the money come from? Are they even needed? Do they fit district planning and the city's development? For the past six months we were carpet-bombed with expo messaging: walk into a bookstore and every magazine cover was an expo guide; turn on the TV or radio and celebrities were touring the expo. But this saturation targeting worked only on Taiwan's good-natured public; in Southeast Asia, Japan, and overseas the effect was limited, because the expo was always a domestic product dressed up as international!

The city government blanketed the public with "large-scale purchases of news coverage and advertising". The budget books show that publicity expressly labeled "Flora Expo" came to NT$400 million: NT$70 million for international publicity and NT$330 million domestic. On top of that, ad production, large-event promotion, and international marketing listed under the Department of Information and Tourism cost another NT$270 million, and smaller items, international exhibition invitations, ticket marketing, fundraising, and outdoor billboards, another NT$130 million.

Under the line item "2010 Flora Expo television issue-marketing plan and execution", the Hau administration paid news channels as much as NT$23,949,060; a separate "2010 Flora Expo television publicity case" ran to NT$33,500,660. Could any TV news channel afford not to give the Hau team face? And the ads too shy to use the name "Flora Expo", hidden instead under "Taipei city affairs special reports", are beyond counting.

Now consider the politics of the expo's attendance numbers.

The city proudly declares success with 8,963,666 admissions, but Taipei residents alone accounted for 26% of them; in other words, one in four visitors was a Taipei citizen showing support.

"Mobilized" attendance from government agencies and corporations was larger still. Some schools marched students through the expo a third time, drawing constant complaints from parents, but teachers could earn commendations for the trips, so why not? By the city's own April 25, 2011 statistics, 930,413 student visits nationwide were teacher-organized, 445,389 of them from Taipei.

The city also claims 6% of visitors were foreign, yet that figure was arrived at by "eyeballing" and "estimation", which means the real number of foreign tourists may be lower still. The National Palace Museum averages about 180,000 foreign visitors a month; scaled to the expo's 171-day run, that would imply at least 1.02 million, but the city concedes that foreign visitors may have numbered only 600,000.

That number proves the "international" expo was anything but: NT$13.5 billion bought 600,000 foreign tourists, less than half the Palace Museum's draw. Yet judging from Mayor Hau's briefing, the city is satisfied just chasing headline numbers; who cares that the expo's original purpose was marketing Taiwan overseas?

As the flowers wilt, the truth keeps surfacing. Take the Pavilion of Dreams, built for NT$370 million: its floor area is a mere 3,377 square meters (1,021 ping), or NT$360,000 per ping, far above market construction prices. More absurdly, the pavilion was designed so small that only 35 people could enter per batch, forcing crowds to queue overnight.

The pavilion could hold at most 3,705 people a day; over the 171-day run only 589,486 people ever got in. That works out to NT$627 of construction cost per visitor, a heartbreaking waste of resources and building money.
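
A quick check of these figures too, again in Python and using only the numbers quoted above:

    # Checking the Pavilion of Dreams numbers.
    build_cost = 3.7e8              # NT$370 million construction cost
    daily_cap, days = 3705, 171     # maximum daily capacity, expo duration
    admissions = 589_486            # actual total admissions
    print(daily_cap * days)         # 633,555 theoretical maximum
    print(build_cost / admissions)  # ~627.6 NT$ of build cost per visitor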

And in practice? Media hype produced an egg-tart-fad stampede of overnight queuing for tickets, while the city quietly opened a side door letting the privileged skip the line into the Pavilion of Dreams. The pavilion of dreams became a pavilion of privilege, while ordinary citizens kept queuing honestly.

The city now admits the expo cost NT$13.591 billion (NT$9.53 billion in dedicated central and local funds + NT$2.748 billion in matching funds from city departments + NT$17 million in AIPH licensing fees + NT$29 million in taxes + NT$21 million in breach-of-contract losses on Zhongshan Soccer Stadium + NT$1.38 billion in corporate sponsorship). When councilors blasted it as a "money expo", Mayor Hau argued back last year, accusing them of miscalculating. Why does the city only admit it now? Only time makes lies visible. NT$13.5 billion could fund free school lunches for Taipei students for seven years (note: about NT$2 billion a year). Taipei hosting an international exposition is fine, but must it spend like a spoiled heir?

Frighteningly, the expo money pit keeps swallowing cash: the city budget shows the Department of Economic Development booking another NT$1,164,602,640 in 2011 to market the successor park.

Using your money to buy news and ads to wash your brain, summoning ever more visitors through mass media, massaging the numbers to match, paying celebrities with tax money to drum up hype; with more and more media people joining the Hau team, is lavishing tax money on packaging really how media professionals repay society? What is the essence of politics? Of an exposition? What do people actually need? Does spending this much really have no crowding-out effect?

Taipei, a city that grows no flowers, mass-marketed flower planting in the wilting winter, exactly what the Frankfurt School feared. Marcuse argued that capitalism, through the culture industry, promotes a consumerist ideology that manufactures false needs and becomes a mechanism of social control (p. 27, "Cultural consumption as manipulation", in John Storey, Cultural Consumption and Everyday Life, trans. Chang Chun-mei, Chuliu, 2001). After the state apparatus spent NT$13.5 billion driving this great consumption, the blossoms were unforgettably beautiful, but once they wilted, what had changed in society? Did the exposition leave any lasting value? Is an exposition's central idea nothing but attendance? Nothing but marketing? Nothing but wildly inflated "economic benefit" figures?

The volunteers were of course warm and moving, the public's mobilization selfless; employees gave up holidays, arts groups pitched in. But all of those are the role of the people, not of government. The fireworks were dazzling, and for six months civil servants collectively morphed into Disney operators, sealing off parks and rerouting roads. Yet government is entrusted by the people to improve their lives, to plan and build for the long term, and to champion progressive ideas. Did the city ever ask its citizens what they wanted?

The flowers are gone and so are the crowds, so let us ask: what values did this exposition champion for our generation? Did it ever insist on not felling trees, on respecting nature? Did the city insist on not damaging historic architecture like the Lin An-tai Historical House? Why did the house lose its authenticity while a cement "Zhejiang-style" mountain was hauled in? Did the much-touted cultural-creative exhibits build a collective vision or a thematic axis for the city? Are flowers swapped out every ten to twenty days environmentally friendly? Trees were felled, flowers transplanted, a skybridge built, and record construction prices bought a pile of temporary buildings. Now the NT$13.591 billion is all spent; after the flowers and the fuss, let us ask what the people actually got, and what kind of life the people actually need.

Ko Wen-je buzzes off to inspect the Universiade athletes' village works

Taipei Mayor Ko Wen-je said today, regarding a man who produced a photo of the two of them and claimed to know him well in hopes of dodging a breathalyzer test, only to be ticketed by police anyway, that the man should be given a second ticket at the maximum fine.

Ko made the remark to reporters this morning while inspecting construction of the Universiade athletes' village in Linkou, when asked about a man surnamed Chen who, stopped by police for drunk driving and refusing a breath test, produced a photo of himself shaking hands with Ko and claimed to be "close to Mayor Ko", but was ticketed regardless.

But when reporters asked about a Facebook post by his wife, Chen Pei-chi, Ko, who had been chatting breezily about the police enforcement, abruptly changed expression and demanded, "What has she written now?"

Ko Wen-je: the Universiade athletes' village handover must be clarified

(CNA, New Taipei City, the 11th, by reporter Wang Hung-kuo) After inspecting the Universiade athletes' village works in Linkou today, Taipei Mayor Ko Wen-je said he wants a meeting with the Construction and Planning Agency and the New Taipei City government to clarify the procedure for handing over the village's equipment after the games.

Ko arrived in Linkou this morning and toured the construction site in a hard hat amid a crush of media, then heard a briefing in the Universiade project office's conference room.

Ko told reporters afterward that the village's construction is ahead of schedule and should be ready for the Universiade; the real issue is the post-games transfer, which should be negotiated now. If, after the games, the city simply throws up its hands and passes everything to the next agency, that could work too, but it has to be spelled out. As for spending NT$2.6 billion on air conditioners, beds, and other fittings for a 12-day competition, Ko said, "I have very strong opinions." Depreciation and transfer of the equipment, including the air conditioners and beds, should be discussed at a joint meeting with the Construction and Planning Agency and New Taipei City.

He said the village will become public housing after the Universiade, but it was designed from the start as an athletes' village, so how much will have to be torn out in the conversion? These are all open questions. "We design it, we use it, and then hand it to someone else to operate; there is no continuity in between."

For example, Ko said, a cafeteria built solely for the athletes' village will be used for just 12 days after completion and then demolished, which infuriated him; what kind of design is that? It shows there was no communication between the designers and the long-term users. If the village is to be transferred to New Taipei, that must be spelled out; if Taipei will operate it itself, that is a different question again.

Ting Hsin stalls on NT$48 billion in loans, giving banks a headache

The Ting Hsin group, including the Wei brothers, owes domestic lenders a combined NT$48 billion. If the NT$6.8 billion-plus syndicated loan on the Sanchong Hsin Yen land cannot be rolled over and goes into default, it could be the fuse that sets off cross-defaults on the other loans. Ting Hsin is gambling that pressure on the banks will let it stall repayment and buy more time to sell assets at higher prices.

State-run banks hold NT$21 billion of the debt

Statistics show Ting Hsin's domestic loan balance is about NT$48 billion (including over NT$800 million in mortgages on the Wei brothers' luxury residences such as The Palace). State-run banks account for about NT$21 billion, of which more than 50% (about NT$10.5 billion) matures at year-end. The largest creditor is Mega International Commercial Bank at NT$13 billion, with First Bank at NT$3 billion-odd and Chang Hwa Bank at NT$2 billion-odd.

Of that total, Ting Hsin and its Taiwan affiliates owe about NT$41.1 billion, including roughly NT$6.9 billion the four Wei brothers borrowed against Taipei 101 shares. Forbes' annual survey puts the four brothers' combined worth at NT$259.2 billion, and Master Kong, the group's flagship, held US$1.553 billion (about NT$46.59 billion) in cash on its books in the first half alone. But eldest brother Wei Ying-chou refuses to "bail out" Taiwan, forcing second brother Wei Ying-chiao and third brother Wei Ying-chung to raise money themselves and sell off Taiwanese assets.

Ting Hsin admits that if the roughly NT$10 billion in loans maturing at year-end cannot be extended, its finances will face a severe test. The group says its interest payments have always been punctual and the Sanchong Hsin Yen land is decent collateral; it actively cooperated with the banking syndicate's rushed auction of the land, but the evaluation window was so short that many buyers could not finish their assessments and the tender failed, so it cannot repay on schedule and still hopes the syndicate will agree to extend the loan.
Wei Chuan, for its part, said the borrower on the Sanchong Hsin Yen land is the group company 頂率 (Ting Lu); it hopes Ting Lu will quickly propose a plan to resolve the matter, keep actively seeking potential buyers, and reach a satisfactory resolution so that Wei Chuan can return to its core business and accelerate reform.

Thoughts and comments
  • I visited the Flora Expo many times. Projects like this need careful oversight, because an expo truly should not cost NT$13.5 billion; look closely at the line items and see how overpriced they are. The biggest failures: the contractors who profited from expo construction were never asked to bring in international visitors, never asked to return part of their earnings to Taipei citizens, and none of the construction money seeded a start-up fund for the city to help young people launch businesses. In the end the contractors pocketed the money, the expo ended, and Taipei still pays maintenance every year, a standing liability. Former Mayor Hau, your expo plainly lacked long-term thinking: actual revenue into the city treasury was only NT$1,392,870,000, and dividing the NT$13.591 billion total cost by 8,963,666 admissions means each visitor cost NT$1,516.23 in tax money, thirty NT$50 lunchboxes' worth, before even counting the personnel cost of seconded civil servants. The economic return was dismal, and the jobs created for Taipei citizens far too few.
  • The Ting Hsin group borrowed NT$48 billion. The government and financial institutions lent to the group at low rates, yet the group never created even a few thousand jobs in Taiwan. The Ma administration did very poorly here; no wonder its "golden decade" is derided as a con artists' decade.
  • Does the government realize that the Flora Expo, the Universiade, and Ting Hsin's borrowing add up to NT$81.5 billion? Split half into subsidies and half into start-up loans, that sum could fund 12,000 micro-entrepreneurship partnerships, creating 120,000 jobs and cutting national unemployment by roughly one percentage point (see the sketch after this list).
  • The Universiade is the same: the contractors are not required to return profits to Taipei citizens, and none of the construction money feeds a start-up fund for the city to back young founders. It is fireworks: once the games end, most of what was built loses its function, and most Taipei citizens shake their heads at it. The government should spend the money creating 3,000 micro-entrepreneurship partnerships instead of splashing out on Universiades and Flora Expos; the Universiade is meaningful only if it spawns micro-startups and jobs. Together the two events cost nearly NT$33.5 billion and fizzled like fireworks, creating far too few jobs; both the KMT and the DPP must fix this. People do not pay taxes for the government to burn everything on recurring expenses, still less on firework projects; they want government programs that create many jobs, and micro-entrepreneurship partnerships are exactly the chance the next generation of young people needs. The government should be ashamed: Taiwan's youth unemployment rate last year was 13.17%, above the global average and above many Asian countries, and still no one is thinking about how to rescue Taiwan's next generation and build them a democratic, prosperous home full of jobs and hope.
  • Judged by income versus real output, Taiwan's government actually performs very poorly, and both the KMT and the DPP must improve. Taiwan's next generation must work to make politicians understand that people expect a democratic, prosperous, hopeful country, not a government whose most tangible outputs, jobs created, FDI, and the private investment rate, stay this low:
    • Government income: tax revenue, state-enterprise profits, returns on government investment
    • Government's most tangible outputs: jobs, FDI, private investment rate, public safety, social welfare
  • Many workers do not even know that the taxes paid by labor amount to 24.5% of total tax revenue. Comparing what workers pay in taxes against what the government spends on labor welfare, Taiwan's government may be the harshest toward workers in the world: it has built a deeply unfair spending system, its public works create the fewest jobs for workers, and it permits massive lending to flow to China while manufacturing unemployment in Taiwan. Why is no one thinking about how to win back hope for Taiwan's next generation?
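
As referenced in the third bullet above, here is a rough Python sketch of that arithmetic. The per-plan amount and job count follow from the bullet's own assumptions; the labor-force size is my added assumption, not a figure from the post:

    # Sketch of the micro-entrepreneurship arithmetic, under stated assumptions.
    total = 815e8                  # NT$81.5 billion (expo + Universiade + Ting Hsin loans)
    plans = 12_000                 # proposed micro-startup partnerships
    jobs_per_plan = 10             # implied by the "120,000 jobs" claim
    labor_force = 11.5e6           # assumption: Taiwan's labor force, ~11.5 million
    print(total / plans)                        # ~NT$6.8 million of subsidy + loan per plan
    print(plans * jobs_per_plan / labor_force)  # ~0.0104, i.e. roughly 1% of the labor force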

Wednesday, January 7, 2015

From machine learning to a revolution in human social structures: giant cloud robots will change human society within 20 years

It used to be that if you wanted to get a computer to do something new, you would have to program it. Now, programming, for those of you here that haven't done it yourself, requires laying out in excruciating detail every single step that you want the computer to do in order to achieve your goal. Now, if you want to do something that you don't know how to do yourself, then this is going to be a great challenge.

So this was the challenge faced by this man, Arthur Samuel. In 1956, he wanted to get this computer to be able to beat him at checkers. How can you write a program, lay out in excruciating detail, how to be better than you at checkers? So he came up with an idea: he had the computer play against itself thousands of times and learn how to play checkers. And indeed it worked, and in fact, by 1962, this computer had beaten the Connecticut state champion.
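
As a minimal sketch of Samuel's self-play idea, the following Python toy substitutes tic-tac-toe for checkers (its state space is tiny) and learns position values purely from the outcomes of games it plays against itself; nothing about good play is programmed in:

    # A toy version of Samuel-style self-play learning: tabular value
    # learning for tic-tac-toe. The table V is filled purely from the
    # outcomes of games the program plays against itself.
    import random
    from collections import defaultdict

    V = defaultdict(float)        # board state -> estimated value for player X
    ALPHA, EPSILON = 0.2, 0.1     # learning rate, exploration rate

    def winner(b):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for i, j, k in lines:
            if b[i] != ' ' and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def play_one_game():
        board, player, history = [' '] * 9, 'X', []
        while True:
            moves = [i for i, c in enumerate(board) if c == ' ']
            if random.random() < EPSILON:
                m = random.choice(moves)        # explore
            else:                               # exploit: X maximizes V, O minimizes V
                pick = max if player == 'X' else min
                m = pick(moves, key=lambda i: V[tuple(board[:i]) + (player,) + tuple(board[i+1:])])
            board[m] = player
            history.append(tuple(board))
            w = winner(board)
            if w or ' ' not in board:
                reward = {'X': 1.0, 'O': -1.0}.get(w, 0.0)
                for state in history:           # back up the final outcome
                    V[state] += ALPHA * (reward - V[state])
                return
            player = 'O' if player == 'X' else 'X'

    for _ in range(20000):
        play_one_game()
    print("learned value estimates for", len(V), "positions")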

So Arthur Samuel was the father of machine learning, and I have a great debt to him, because I am a machine learning practitioner. I was the president of Kaggle, a community of over 200,000 machine learning practitioners. Kaggle puts up competitions to try and get them to solve previously unsolved problems, and it's been successful hundreds of times. So from this vantage point, I was able to find out a lot about what machine learning could do in the past, can do today, and what it could do in the future. Perhaps the first big commercial success of machine learning was Google. Google showed that it is possible to find information by using a computer algorithm, and this algorithm is based on machine learning. Since that time, there have been many commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest products that you might like to buy, movies that you might like to watch. Sometimes, it's almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about who your friends might be and you have no idea how it did it, and this is because it's using the power of machine learning. These are algorithms that have learned how to do this from data rather than being programmed by hand.

This is also how IBM was successful in getting Watson to beat the two world champions at "Jeopardy," answering incredibly subtle and complex questions like this one. ["The ancient 'Lion of Nimrud' went missing from this city's national museum in 2003 (along with a lot of other stuff)"] This is also why we are now able to see the first self-driving cars. If you want to be able to tell the difference between, say, a tree and a pedestrian, well, that's pretty important. We don't know how to write those programs by hand, but with machine learning, this is now possible. And in fact, this car has driven over a million miles without any accidents on regular roads.

So we now know that computers can learn, and computers can learn to do things that we actually sometimes don't know how to do ourselves, or maybe can do them better than us. One of the most amazing examples I've seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy called Geoffrey Hinton from the University of Toronto won a competition for automatic drug discovery. Now, what was extraordinary here is not just that they beat all of the algorithms developed by Merck or the international academic community, but nobody on the team had any background in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm called deep learning. So important was this that in fact the success was covered in The New York Times in a front page article a few weeks later. This is Geoffrey Hinton here on the left-hand side. Deep learning is an algorithm inspired by how the human brain works, and as a result it's an algorithm which has no theoretical limitations on what it can do. The more data you give it and the more computation time you give it, the better it gets.
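
A minimal sketch of what "learning from data" means mechanically: a tiny two-layer network trained by gradient descent on the XOR problem, in plain numpy. Deep learning is essentially this loop scaled up by many orders of magnitude of data and computation:

    # A two-layer neural network learning XOR from data alone.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)                    # hidden activations
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))        # predicted probabilities
        dp = p - y                                  # cross-entropy gradient
        dW2, db2 = h.T @ dp, dp.sum(0)
        dh = (dp @ W2.T) * (1 - h ** 2)             # backpropagate through tanh
        dW1, db1 = X.T @ dh, dh.sum(0)
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.1 * grad                     # gradient descent step

    print(p.round(2))   # should approach [[0], [1], [1], [0]]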

The New York Times also showed in this article another extraordinary result of deep learning which I'm going to show you now. It shows that computers can listen and understand.

(Video) Richard Rashid: Now, the last step that I want to be able to take in this process is to actually speak to you in Chinese. Now the key thing there is, we've been able to take a large amount of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text and converts it into Chinese language, and then we've taken an hour or so of my own voice and we've used that to modulate the standard text-to-speech system so that it would sound like me. Again, the result's not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There's much work to be done in this area. (In Chinese)

Jeremy Howard: Well, that was at a machine learning conference in China. It's not often, actually, at academic conferences that you do hear spontaneous applause, although of course sometimes at TEDx conferences, feel free. Everything you saw there was happening with deep learning. (Applause) Thank you. The transcription in English was deep learning. The translation to Chinese and the text in the top right, deep learning, and the construction of the voice was deep learning as well.

So deep learning is this extraordinary thing. It's a single algorithm that can seem to do almost anything, and I discovered that a year earlier, it had also learned to see. In this obscure competition from Germany called the German Traffic Sign Recognition Benchmark, deep learning had learned to recognize traffic signs like this one. Not only could it recognize the traffic signs better than any other algorithm, the leaderboard actually showed it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see better than people. Since that time, a lot has happened. In 2012, Google announced that they had a deep learning algorithm watch YouTube videos, crunching the data on 16,000 computers for a month, and the computer independently learned about concepts such as people and cats just by watching the videos. This is much like the way that humans learn. Humans don't learn by being told what they see, but by learning for themselves what these things are. Also in 2012, Geoffrey Hinton, who we saw earlier, won the very popular ImageNet competition, which asks systems to figure out, from one and a half million images, what each is a picture of. As of 2014, we're now down to a six percent error rate in image recognition. This is better than people, again.
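
For readers who want to see the shape of such a system, here is a minimal sketch of a convolutional network of the kind used for image benchmarks, assuming PyTorch is installed; real winning entries are far deeper and are trained on millions of labeled images:

    # A tiny convolutional network for 32x32 color images (PyTorch assumed).
    import torch
    import torch.nn as nn

    class TinyConvNet(nn.Module):
        def __init__(self, n_classes=43):           # GTSRB has 43 sign classes
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, n_classes)

        def forward(self, x):                       # x: (batch, 3, 32, 32)
            return self.classifier(self.features(x).flatten(1))

    model = TinyConvNet()
    logits = model(torch.randn(4, 3, 32, 32))       # four random RGB images
    print(logits.shape)                             # torch.Size([4, 43])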

So machines really are doing an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location in France in two hours, and the way they did it was that they fed street view images into a deep learning algorithm to recognize and read street numbers. Imagine how long it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded to Baidu's deep learning system, and underneath you can see that the system has understood what that picture is and found similar images. The similar images actually have similar backgrounds, similar directions of the faces, even some with their tongue out. It's clearly not looking at the text of a web page; all I uploaded was an image. So we now have computers which really understand what they see and can therefore search databases of hundreds of millions of images in real time.

So what does it mean now that computers can see? Well, it's not just that computers can see. In fact, deep learning has done more than that. Complex, nuanced sentences like this one are now understandable with deep learning algorithms. As you can see here, this Stanford-based system showing the red dot at the top has figured out that this sentence is expressing negative sentiment. Deep learning now is in fact near human performance at understanding what sentences are about and what they are saying about those things. Also, deep learning has been used to read Chinese, again at about native Chinese speaker level. This algorithm was developed in Switzerland by people, none of whom speak or understand any Chinese. As I say, using deep learning is about the best system in the world for this, even compared to native human understanding.
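
The Stanford system is a deep, tree-structured network; as a much simpler stand-in for the same task, mapping a sentence to a sentiment label, here is a bag-of-words sketch assuming scikit-learn is available:

    # Sentence -> sentiment label with a bag-of-words classifier.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = ["a wonderful, moving film", "clever and satisfying",
                   "a dull, lifeless mess", "painfully boring plot"]
    train_labels = [1, 1, 0, 0]                     # 1 = positive, 0 = negative

    clf = make_pipeline(CountVectorizer(), LogisticRegression())
    clf.fit(train_texts, train_labels)
    print(clf.predict(["a satisfying and moving story",
                       "boring and lifeless"]))     # expected: [1 0]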

This is a system that we put together at my company which shows putting all this stuff together. These are pictures which have no text attached, and as I'm typing in here sentences, in real time it's understanding these pictures and figuring out what they're about and finding pictures that are similar to the text that I'm writing. So you can see, it's actually understanding my sentences and actually understanding these pictures. I know that you've seen something like this on Google, where you can type in things and it will show you pictures, but actually what it's doing is it's searching the webpage for the text. This is very different from actually understanding the images. This is something that computers have only been able to do for the first time in the last few months.
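
The demo system itself is proprietary; the sketch below shows only the retrieval step such a system relies on, assuming some model has already embedded images and text into one shared vector space. The vectors and file names here are made up for illustration:

    # Nearest-neighbour retrieval in a shared text/image embedding space.
    # The embeddings below are hypothetical; in a real system they would
    # come from a trained model.
    import numpy as np

    image_embeddings = {
        "dog_on_beach.jpg":  np.array([0.9, 0.1, 0.0]),
        "city_at_night.jpg": np.array([0.0, 0.2, 0.9]),
        "cat_on_sofa.jpg":   np.array([0.7, 0.6, 0.1]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    query = np.array([0.8, 0.2, 0.1])   # hypothetical embedding of "a dog outdoors"
    best = max(image_embeddings, key=lambda name: cosine(query, image_embeddings[name]))
    print(best)                          # -> dog_on_beach.jpg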

So we can see now that computers can not only see but they can also read, and, of course, we've shown that they can understand what they hear. Perhaps not surprising now that I'm going to tell you they can write. Here is some text that I generated using a deep learning algorithm yesterday. And here is some text that an algorithm out of Stanford generated. Each of these sentences was generated by a deep learning algorithm to describe each of those pictures. This algorithm has never before seen a man in a black shirt playing a guitar. It's seen a man before, it's seen black before, it's seen a guitar before, but it has independently generated this novel description of this picture. We're still not quite at human performance here, but we're close. In tests, humans prefer the computer-generated caption one out of four times. Now, this system is only two weeks old, so probably within the next year, the computer algorithm will be well past human performance at the rate things are going. So computers can also write.

So we put all this together and it leads to very exciting opportunities. For example, in medicine, a team in Boston announced that they had discovered dozens of new clinically relevant features of tumors which help doctors make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that, looking at tissues under magnification, they've developed a machine learning-based system which in fact is better than human pathologists at predicting survival rates for cancer sufferers. In both of these cases, not only were the predictions more accurate, but they generated new insightful science. In the radiology case, they were new clinical indicators that humans can understand. In this pathology case, the computer system actually discovered that the cells around the cancer are as important as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists had been taught for decades. In each of those two cases, they were systems developed by a combination of medical experts and machine learning experts, but as of last year, we're now beyond that too. This is an example of identifying cancerous areas of human tissue under a microscope. The system being shown here can identify those areas more accurately, or about as accurately, as human pathologists, but was built entirely with deep learning using no medical expertise by people who have no background in the field. Similarly, here, this neuron segmentation. We can now segment neurons about as accurately as humans can, but this system was developed with deep learning using people with no previous background in medicine.

So myself, as somebody with no previous background in medicine, I seem to be entirely well qualified to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest that it ought to be possible to do very useful medicine using just these data analytic techniques. And thankfully, the feedback has been fantastic, not just from the media but from the medical community, who have been very supportive. The theory is that we can take the middle part of the medical process and turn that into data analysis as much as possible, leaving doctors to do what they're best at. I want to give you an example. It now takes us about 15 minutes to generate a new medical diagnostic test and I'll show you that in real time now, but I've compressed it down to three minutes by cutting some pieces out. Rather than showing you creating a medical diagnostic test, I'm going to show you a diagnostic test of car images, because that's something we can all understand.

So here we're starting with about 1.5 million car images, and I want to create something that can split them into the angle of the photo that's being taken. So these images are entirely unlabeled, so I have to start from scratch. With our deep learning algorithm, it can automatically identify areas of structure in these images. So the nice thing is that the human and the computer can now work together. So the human, as you can see here, is telling the computer about areas of interest which it wants the computer then to try and use to improve its algorithm. Now, these deep learning systems actually are in 16,000-dimensional space, so you can see here the computer rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then point out the areas that are interesting. So here, the computer has successfully found areas, for example, angles. So as we go through this process, we're gradually telling the computer more and more about the kinds of structures we're looking for. You can imagine in a diagnostic test this would be a pathologist identifying areas of pathosis, for example, or a radiologist indicating potentially troublesome nodules. And sometimes it can be difficult for the algorithm. In this case, it got kind of confused. The fronts and the backs of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts as opposed to the backs, then telling the computer that this is a type of group that we're interested in.
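
A minimal sketch of that "finding structure by projection" step: reducing high-dimensional points to two dimensions so a human can spot clusters, here with PCA on synthetic data standing in for the talk's 16,000-dimensional learned features (scikit-learn assumed):

    # Projecting 100-dimensional points to 2-D so a human can spot clusters.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=0.0, size=(200, 100))  # e.g. "fronts" of cars
    group_b = rng.normal(loc=1.5, size=(200, 100))  # e.g. "backs" of cars
    points = np.vstack([group_a, group_b])

    projected = PCA(n_components=2).fit_transform(points)
    print(projected.shape)                          # (400, 2): ready to plot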

So we do that for a while, we skip over a little bit, and then we train the machine learning algorithm based on these couple of hundred things, and we hope that it's gotten a lot better. You can see, it's now started to fade some of these pictures out, showing us that it already is recognizing how to understand some of these itself. We can then use this concept of similar images, and using similar images, you can now see, the computer at this point is able to entirely find just the fronts of cars. So at this point, the human can tell the computer, okay, yes, you've done a good job of that.

Sometimes, of course, even at this point it's still difficult to separate out groups. In this case, even after we let the computer try to rotate this for a while, we still find that the left-side and right-side pictures are all mixed up together. So we can again give the computer some hints, and we say, okay, try and find a projection that separates out the left sides and the right sides as much as possible using this deep learning algorithm. And giving it that hint -- ah, okay, it's been successful. It's managed to find a way of thinking about these objects that separates them out.

So you get the idea here. This is a case not where the human is being replaced by a computer, but where they're working together. What we're doing here is we're replacing something that used to take a team of five or six people about seven years and replacing it with something that takes 15 minutes for one person acting alone.

So this process takes about four or five iterations. You can see we now have 62 percent of our 1.5 million images classified correctly. And at this point, we can start to quite quickly grab whole big sections, check through them to make sure that there's no mistakes. Where there are mistakes, we can let the computer know about them. And using this kind of process for each of the different groups, we are now up to an 80 percent success rate in classifying the 1.5 million images. And at this point, it's just a case of finding the small number that aren't classified correctly, and trying to understand why. And using that approach, by 15 minutes we get to 97 percent classification rates.
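
A minimal sketch of this human-in-the-loop iteration, with a simulated "human" supplying labels for the points the model is least sure about; scikit-learn assumed, data synthetic:

    # Human-in-the-loop labeling: query labels for the least certain points.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    labeled = list(range(10))                       # start with ten labeled points

    for round_ in range(5):
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        proba = clf.predict_proba(X)[:, 1]
        uncertainty = np.abs(proba - 0.5)           # 0 = model least sure
        # simulate asking the human about the 20 most uncertain points
        candidates = [i for i in np.argsort(uncertainty) if i not in labeled]
        labeled += candidates[:20]
        print(f"round {round_}: accuracy {clf.score(X, y):.2%}")
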
So this kind of technique could allow us to fix a major problem, which is that there's a lack of medical expertise in the world. The World Economic Forum says that there's between a 10x and a 20x shortage of physicians in the developing world, and it would take about 300 years to train enough people to fix that problem. So imagine if we can help enhance their efficiency using these deep learning approaches?

So I'm very excited about the opportunities. I'm also concerned about the problems. The problem here is that every area in blue on this map is somewhere where services are over 80 percent of employment. What are services? These are services. These are also the exact things that computers have just learned how to do. So 80 percent of the developed world's employment is stuff that computers have just learned how to do. What does that mean? Well, it'll be fine. They'll be replaced by other jobs. For example, there will be more jobs for data scientists. Well, not really. It doesn't take data scientists very long to build these things. For example, these four algorithms were all built by the same guy. So if you think, oh, it's all happened before, we've seen the results in the past of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It's very hard for us to estimate this, because human performance grows at this gradual rate, but we now have a system, deep learning, that we know actually grows in capability exponentially. And we're here. So currently, we see the things around us and we say, "Oh, computers are still pretty dumb." Right? But in five years' time, computers will be off this chart. So we need to be starting to think about this capability right now.

We have seen this once before, of course. In the Industrial Revolution, we saw a step change in capability thanks to engines. The thing is, though, that after a while, things flattened out. There was social disruption, but once engines were used to generate power in all the situations, things really settled down. The Machine Learning Revolution is going to be very different from the Industrial Revolution, because the Machine Learning Revolution, it never settles down. The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before, so your previous understanding of what's possible is different.

This is already impacting us. In the last 25 years, as capital productivity has increased, labor productivity has been flat, in fact even a little bit down.

So I want us to start having this discussion now. I know that when I often tell people about this situation, people can be quite dismissive. Well, computers can't really think, they don't emote, they don't understand poetry, we don't really understand how they work. So what? Computers right now can do the things that humans spend most of their time being paid to do, so now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality. Thank you!


Monday, December 29, 2014

With Gmail, Line, and Twitter now heavily blocked in China, within five years the mainland will be the world's most totalitarian, undemocratic state, and China's next generation will struggle to grasp the difference between popular sovereignty and CCP totalitarianism

Gmail blocked in China; netizens voice worry and discontent
What is just as outrageous is that mainland China's young people actually use Gmail. Some criticize the CCP for politicizing everything; it is hardly how Google wants its users treated. Google at least respects its users, unlike the CCP regime, which shows no respect for people's freedom of thought and blocks Gmail, Line, and Twitter at will. At the very least, blocking a specific user should require evidence. Within five years the mainland will be the world's most totalitarian state, and the one that seems strangest and most shameful.

Reports indicate that Gmail has been blocked in China: large numbers of Chinese netizens who rely on Gmail can no longer send or receive mail from their email clients, and many are unhappy about it. Gmail's web version has long been disrupted in China, with very unstable connections, so many users who are not adept at "scaling the wall" had switched to desktop or mobile email clients, which use the IMAP and POP3 protocols rather than going through Gmail's web interface.

Now those client channels have been blocked as well. Gmail's real-time traffic data for China shows that from December 27 almost no netizens could use the service. Many voiced their worry and discontent on social sites such as Weibo and Twitter. 洛之秋: many students are right now at the critical stage of applying to study abroad, and the contact address they listed is a Gmail account; this blockade will make it enormously inconvenient for them to correspond with overseas universities. Years from now, the experience will probably make them all the more resolute when deciding whether to return to China.

劉興量 wrote on Weibo that Gmail has gone completely dark. In plain terms: before, only the web page was unreachable, and you could still send and receive mail through phone and desktop clients; now, don't even hope for your mail client to update. Netizen 從來1840 said: I don't know what to say, I just want to swear. All they do is facade projects, plastered everywhere thicker than illegal flyers. Ah, "Chinese characteristics"; now Gmail has been blocked too.

Netizen HeySuperman-伍壕 said: never mind Gmail, every Google-related service is unusable; even using a mailbox gets you blocked, it's sickening and damn annoying, and it makes daily life much harder. Why not just build a wall around the whole of China? That would suit our tradition; seal the country off and cut all outside information, just like North Korea. A netizen called 竹芒Reason pointed out that Gmail is now completely blocked inside China: mail cannot be received on the web or in any client, and even messages sent to Gmail addresses are intercepted. This move is truly vicious; even people overseas are swept up in it. How twisted would someone have to be to think this up?

Netizen 簡單說兩句: 1. Gmail was blocked without a word of explanation. 2. The drama "The Empress of China" was pulled off the air. Just one question: why?

A netizen called 吳醫師 said: inside China, Google has been blocked across the board, and Gmail, the mailbox the kids love, was shut down long ago. Yahoo works on and off; some article titles show up but cannot be opened, and so on. The propaganda departments are not just mighty, they are slippery too. Heh! Netizen 資深帥鍋 said: the Ministry of Commerce says China will improve the environment for foreign investment in 2015, yet Gmail has been blocked dead and foreigners cannot even get their email through, so what environment exactly is being improved? Google shut down its mainland search service, google.cn, in March 2010, and this year, on the eve of the 25th anniversary of the June 4th incident, Google's services in China, Gmail included, were blocked and became impossible to access or use normally.

Mainland China blocks LINE, Facebook, Google, and Twitter: is it really the one country in the world this afraid of democracy?
Mainland China's total debt reached 277% of GDP in 2012, was set to overtake Germany's by 2014, and by 2016 could well join Italy among the highly indebted countries.

Word spread online earlier that LINE (officially "連我" in China), the instant messenger that had only just entered the Chinese market, stopped working on the mainland from July 1, raising suspicions that the authorities had moved against the app. The episode finally angered mainland netizens long used to swallowing such treatment; they accused the central government of stripping away their right to choose, and worried that the hugely popular photo-sharing app Instagram would be blocked next, cutting Chinese netizens off from the world.

China's GFW (Great Firewall) has long been "hailed" as a virtual Great Wall, filtering and blocking particular keywords, IP addresses, and specific websites; its best-known targets are services Hong Kongers use daily, such as the video site YouTube, the social network Facebook, and the Google search engine. Recently, Chinese users reported that from July 1, a message sent through LINE would only show an exclamation mark beside it, meaning delivery had failed. Users who tried deleting and reinstalling the app repeatedly hit the same problem, and some, worried that their chat histories and purchased sticker sets would vanish, turned to LINE's official Weibo account for help. LINE said via that account: "LINE users in China are currently experiencing access problems; we are doing our utmost to repair the issue."

... continue reading

Analysis
  • Mainland China's blocking of LINE, Facebook, Google, and Twitter is simply not appropriate. Measured against how democracies have developed worldwide, mainland China's democracy lags even Africa's, which is a disgrace; it must improve and give its people freedom of online association. Few governments anywhere are this totalitarian in controlling their citizens' social activity. If after 2019 or 2020 the people of mainland China are less free than farm-raised sheep, how shameful will that be? China's leaders should give the people the same basic democratic freedoms they give their own sons and daughters; that would be the truly great achievement.
  • Given mainland China's growth in per-capita income and the spread of education, freedom of association and basic democratic elections are simply what the people are owed. A democratic mainland China would be the pride of Chinese people everywhere; its lack of democracy and its internet blockade are the shame of Chinese people everywhere. That only the children of top CCP leaders get to live free and democratic lives in Europe and America, while 1.3 billion mainlanders are denied basic democracy, is plainly an enormous disgrace.
  • The mainland's internet blockade aimed at its young people will gradually extend into thought surveillance, and the next generation of mainland Chinese will find it hard to understand the difference between popular sovereignty and CCP totalitarianism. Taiwan must therefore be careful not to lean too close to mainland China; expanding trade and people-to-people ties with other countries is the real safeguard for the Republic of China on Taiwan.