Thursday, October 5, 2017

Friday Thinking 06 Oct. 2017

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition - that tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9



Every child begins their journey through life with an incredible potential: a creative mindset that approaches the world with curiosity, with questions, and with a desire to learn about the world and themselves through play.

However, this mindset is often eroded or even erased by conventional educational practices when young children enter school.

The Torrance Test of Creative Thinking is often cited as an example of how children’s divergent thinking diminishes over time. 98% of children in kindergarten are “creative geniuses” – they can think of endless ways to use a paper clip.

This ability is reduced drastically as children go through the formal schooling system and by age 25, only 3% remain creative geniuses.
Most of us only come up with one or a handful of uses for a paperclip.

What is most concerning in connection with the human capital question is that over the last 25 years, the Torrance Test has shown a decrease in originality among young children (kindergarten to grade 3).

...If we agree on the urgent need for developing skills of complex problem solving, critical thinking and creativity, it is essential that we recognise that these skills are built by learning through play across the lifespan.

This is the one skill your child needs for the jobs of the future

In many ways, the scepticism towards the potential adoption of electric vehicles is reminiscent of the early days of the cell phone market. In the early 1980s, McKinsey produced a report for AT&T on the potential world cell phone market. The report identified big hurdles to the adoption of cell phones such as bulkiness of the handsets, short duration of the battery charge, high cost per minute, and lack of coverage. The report predicted a market of 900,000 cell phones by 2000 (The Economist 1999). The actual number turned out to be 120 times larger than forecast at 109 million phones (Seba 2016).
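The size of that forecasting miss is easy to check from the figures quoted above:

```python
# Figures quoted above: McKinsey's forecast for AT&T versus the actual
# number of cell phones in use by 2000.
forecast = 900_000
actual = 109_000_000

ratio = actual / forecast
print(f"Actual market was ~{round(ratio)}x the forecast")  # ~121x
```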

Similar obstacles – such as high cost, lack of infrastructure, and short range – face early adopters of electric vehicles. Yet these hurdles are disappearing, lending support to the projected rise of electric vehicles. In particular, the announced price of Tesla’s Model 3 at $35,000 is about the average price of a new car sold in the US in 2015 (NADA 2015). More important, it is at the threshold price, in terms of affordability, of the Ford Model T, at which motor vehicle adoption started accelerating rapidly in the early 20th century.

Today, a large-scale and rapid energy transition in the next 10 to 25 years may seem unlikely, but as the renowned futurist James Dator remarked, “decision-makers, and the general public, if they wish useful information about the future, should expect it to be unconventional and even shocking, offensive, and seemingly ridiculous…” (Slaughter 1996). As we have observed, coal did not disappear after 1960. However, it did become much less economically and geopolitically relevant. The same fate might await oil, and ours could be the last age of oil, in which oil would become the new coal.

Riding the energy transition: Oil beyond 2040

There are so many open questions to be answered and not enough hours in a day.  A major challenge is just choosing which of the exciting problems to devote time to.  Also, simply staying up to date in multiple rapidly advancing fields of research is a major challenge.  New machine learning papers are popping up so rapidly that it's impossible to read all of them.  You really have to learn to develop a strong filter.  It's also worth noting that many papers simply don't work as well as advertised.  This makes it challenging to decide which ideas to implement and build on for our DNA models.  Our original code contains loads of commented-out lines of ideas that simply didn't help or made results worse, and each of those lines corresponds to multi-hour or multi-day experiments.

A variety of different areas of machine learning are maturing to the point at which I expect we can see tremendous innovations in healthcare.  First and foremost, recent advances in fairness, the prevention of discrimination and bias, and differential privacy in machine learning help ensure that we can apply state-of-the-art methods in an ethical way while maintaining privacy.  This, coupled with new techniques for merging disparate data sources and natural language processing, will allow us to analyze medical records, discover symptoms and even predict medical outcomes.

I expect tremendous advances in medical image analysis in the next 5 years, assisting experts to significantly improve morbidity detection rates.  Deep learning methods are already enabling analysis that outperforms experts in applications such as the detection of diabetic eye disease and cancerous tumors.  I don’t see experts being replaced entirely, but instead being able to achieve higher throughput at higher accuracy with more time to focus on the hardest cases and with a strong second opinion to help minimize human error.  I expect this to spread across medical domains to, for example, cardiology where algorithms could automatically analyze EKGs taken at home.


2006 wasn’t a great year for our brains. In July, Twitter launched to the world, and in September, Facebook added the News Feed. But neither was particularly popular yet, and if you look at the top 10 most popular websites that year, they were dominated by search:
  1. Yahoo
  2. Time Warner (AOL)
  3. Microsoft
  4. Google
  5. eBay
  6. Fox
  7. Amazon
  8. Ask
  9. Walmart
  10. Viacom
The only top-10 sites that year not search-focused were Fox and Viacom. People in 2006 went on the Internet to find something. Even if we extend it to the top 20, the only other “entertainment” site that shows up is Disney. Entertainment sites only made up 3 of the top 20, and there were no social media sites.

Now let’s compare that to the current rankings:
  1. Google
  2. YouTube
  3. Facebook
  4. Wikipedia
  5. Yahoo!
  6. Reddit
  7. Amazon
  8. Twitter
  9. Windows Live
  10. Instagram
Google has the top spot, but now 6 of the 10 most popular sites are social or entertainment related. Whereas the 2006 Internet was used to find specific information, the 2017 Internet is used more to see what’s out there to entertain you. You don’t go to Facebook, Reddit, Twitter, or Instagram because you’re looking for something; rather, you want to see what it has found for you.

The Destructive Switch from Search to Social

There is increasing signalling around the concept of Universal Basic Income - much of the reaction to this concept has a ‘moral’ quality about the need to earn one’s way in the world - a sense of the moral imperative of ‘having a job’. This 8 min video is a very worthwhile listen and questions that assumption, which is implicitly an assumption of ‘Total Work’.

Could UBI Keep Society From Becoming a “Total Work” Dystopia?

Practical philosopher Andrew Taggart believes UBI is the ideological push we need to question why we’re so obsessed with work.

This is a fantastic 5.5 min video on Twitter - a MUST VIEW - this is clear, brilliant, courageous leadership.

"If you can't treat someone with dignity and respect--then you need to get out."-Lt. Gen. Jay B. Silveria, Superintendent @AF_Academy

This is an important signal arising from numerous recent devastations - a decentralized mesh or fog network can be integrated with a distributed renewable energy framework - to provide a more robust civil infrastructure. The first responders in any crisis situation are the people in the crisis situation - the key question is what types of infrastructure can best and most robustly enable people in crisis to help themselves, and help the inevitable helpers provide vital support. What infrastructures can be the most robust for maintaining survivability as well as evolvable thrivability?
Under the label of “Never let a Disaster go to Waste” - there is a huge possibility now in the wake of the recent devastations wrought by nature - to develop a sort of ‘Marshall Plan’ of renewable, digital, calamity resistant infrastructure.
“Increasingly, data gathered from passive and active sensors that people carry with them, such as their mobile phones, is being used to inform situational awareness in a variety of settings,” said Kishore Ramachandran, computer science professor at Georgia Tech and lead researcher on the project, which is funded by the National Science Foundation.
“In this way, humans are providing beneficial social sensing services. However, current social sensing services depend on internet connectivity since the services are deployed on central cloud platforms.”

Computing Power on the ‘Edge’ of the Internet May Help First Responders Communicate Better During Natural Disasters

Research demonstrates ability to gather and share data without internet service
Storms like Hurricane Irma and other natural disasters bring with them lots of uncertainty: where will they go, and how much damage will they cause? What is certain is that no matter where they strike, natural disasters knock out power.

And, no power means no internet for thousands of people in affected areas.
However, Georgia Tech researchers are proposing a new way of gathering and sharing information during natural disasters that does not rely on the internet.

Using computing power built into mobile phones, routers, and other hardware to create a network, emergency managers and first responders will be able to share and act on information gathered from people impacted by hurricanes, tornados, floods, and other disasters.

The Georgia Tech proposal takes advantage of edge computing. Also known as fog computing, edge computing places more processing capabilities in sensing devices – like surveillance cameras, embedded pavement sensors, and others, as well as in consumer devices like cell phones, readers, and tablets – in order to reduce network latency between sensors, apps, and users.

Rather than just being able to communicate through the internet with central cloud platforms, the Georgia Tech team has demonstrated that by harnessing edge computing resources, sensing devices can be enabled to identify and communicate with other sensors in an area.
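As an illustration only - the article does not describe the Georgia Tech system in implementable detail - the core idea of edge devices pooling data with reachable peers, with no central cloud involved, can be sketched as a toy simulation (all names here are hypothetical):

```python
# Toy sketch of peer-to-peer data sharing at the network edge - an
# illustration of the concept, not the Georgia Tech implementation.
# Each node holds its own sensor readings and gossips them to the
# neighbours it can reach directly over the local network.

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.readings = {}    # reading id -> value, from any node
        self.neighbours = []  # nodes reachable without internet uplink

    def sense(self, reading_id, value):
        self.readings[reading_id] = value

    def gossip(self):
        """Push every known reading to each directly reachable neighbour."""
        for peer in self.neighbours:
            peer.readings.update(self.readings)

# Three phones/sensors in a disaster area with no internet uplink.
a, b, c = EdgeNode("a"), EdgeNode("b"), EdgeNode("c")
a.neighbours, b.neighbours = [b], [c]   # a reaches b; b reaches c

a.sense("flood-depth", 1.2)
a.gossip()   # b now knows about the flooding
b.gossip()   # ...and relays it onward to c

print(c.readings)  # {'flood-depth': 1.2}
```

Even this toy version shows the key property: information reaches node `c` although `a` and `c` were never directly connected, and no central server was involved.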

The signals around new computational paradigms, distributed (fog) computing, and distributed energy frameworks are important drivers - as we move to harness other forms of distributed capabilities.
In June, the world’s bitcoin miners were generating roughly 5 quintillion 256-bit cryptographic hashes every second, according to one all-things-Bitcoin tracking website. That’s a 5 with 18 zeros after it, every second. No entity tracks how much power it takes to sustain that level of computation. But estimates by independent researchers suggest it’s around 500 megawatts—enough to supply roughly 325,000 homes—with the activity concentrated in China and a few other countries with cheap energy and, in some cases, loose regulations on emissions.
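Taken together, the two estimates quoted above imply a striking per-hash energy cost - a quick back-of-the-envelope check:

```python
# Rough arithmetic on the figures quoted above.
hash_rate = 5e18   # hashes per second (June estimate)
power = 500e6      # watts (~500 MW)
homes = 325_000

joules_per_hash = power / hash_rate
watts_per_home = power / homes

print(f"{joules_per_hash:.1e} J per hash")  # 1.0e-10 J (0.1 nanojoule)
print(f"~{watts_per_home:.0f} W per home")  # ~1538 W of continuous draw
```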

The Ridiculous Amount of Energy It Takes to Run Bitcoin

Running Bitcoin uses a small city’s worth of electricity. Intel and others want to make a more sustainable blockchain
Bitcoin “miners” are electromagnetic alchemists, effectively turning megawatt-hours of electricity into the world’s fastest-growing currency. Their intensive computational activity cryptographically secures the virtual currency, approves transactions, and, in the process, creates new bitcoins for the miners, as payment.

And it does another thing, too: It uses an absolutely stunning amount of power. The ever-expanding racks of processors used by miners already consume as much electricity as a small city. It’s a problem that experts say is bad and getting worse.

The Bitcoin leech sucking on the world’s power grids has been held in check, so far, by rapid gains in the energy efficiency of mining hardware. But energy and blockchain analysts are worried about the possibility of a perfect storm: Those efficiency gains are slowing while bitcoin value is rising fast—and its potential transaction growth is immense.

There’s a silver lining, though: This troubling energy picture is inspiring innovators such as Reed to come up with energy-saving approaches that would unleash the technology behind Bitcoin, allowing it to expand into applications for which it was never intended [see “Blockchains: How They Work and Why They’ll Change the World,” in this issue]. Developers of blockchains for such disparate applications as health care management and solar-power trading see Bitcoin’s energy-intensive design as a nonstarter and are now crafting more sustainable blockchains.

Another signal of a massive change toward distributed frameworks.

China Becomes First Country in the World to Test a National Cryptocurrency

China's central bank has developed its own cryptocurrency, which is now being tested. Cryptocurrencies have the potential to not only benefit China, but the rest of the world, due to their basis in blockchain.
The potential benefits of developing a digital currency are significant, particularly in China. First, it would decrease the cost of transactions, and therefore make financial services more accessible, which would be a big help to the millions of people in the country who are unconnected to conventional banks. Second, as it would be supported by blockchain, it has the potential to decrease the rates of fraud and counterfeiting, which would be of service to the government’s attempts to reduce corruption — a key concern. Third, it would make the currency easier to obtain, which would increase the rate of international transactions, allowing for more trades and faster economic growth.

Since Bitcoin’s humble beginnings back in 2009 (when it was only valued at around 0.0007 USD), the digital currency - and, in fact, the very idea of cryptocurrencies - has grown monumentally. The total market cap of cryptocurrencies on April 1st of this year was over $25 Billion. A single Bitcoin is now worth more than $2,500. Now many national economies, as China’s plan shows, are considering the idea of developing their own variant.

Although China’s experimental approach to simulate a self-developed cryptocurrency’s usage is the first of its kind, other countries and institutions have made strides in that direction as well. The Deputy of Russia’s central bank has emphatically stated that “regulators of all countries agree that it’s time to develop national cryptocurrencies.” Over 260,000 stores in Japan will begin accepting Bitcoin as legal tender this summer, and big banks like Santander have announced plans to develop their own version.

China may be the first - but Japan has plans as well - both are good signals of not only new forms of currency - but of an emerging new economic paradigm.

Japanese Banks Are Planning to Launch J-Coin, a Digital Currency Meant to Kill Off Cash

Japan's central bank is backing a scheme that could see the cash-dependent country move toward a digital currency built on blockchain technology.

The J Coin, as it's to be called, is under development by a group of Japanese banks with the blessing of financial regulators. According to the Financial Times (paywall), it's meant to launch in time for the 2020 Tokyo Olympics as a way to streamline the country's financial system.

At the moment, some 70 percent of all financial transactions in Japan use cash—a far higher share than in most developed countries, where cash has been on the decline for some time now. Relying so heavily on hard currency exacts costs in the form of transaction and handling fees, as well as the expenses associated with moving all those notes and coins around. Cash transactions are also easier to hide from regulators, and India, for example, cited shutting down the black market as one reason it decided to push aggressively towards digital money.

The idea for J-Coin is that it would sit alongside the Japanese yen, exchanged at a one-to-one rate, and be offered as a free service. In return, the banks that operate it would get detailed data on how people use it (as we've discussed before, that will indeed make people easier to track).

This is certainly a weak signal - but one that points to the trajectory of new forms of computational frameworks - and could possibly cohere with fog computing frameworks. That said - it is also on the edge of fundamental science frontiers.

New type of supercomputer could be based on ‘magic dust’ combination of light and matter

A team of researchers from the UK and Russia have successfully demonstrated that a type of ‘magic dust’ which combines light and matter can be used to solve complex problems and could eventually surpass the capabilities of even the most powerful supercomputers.

This is another strong signal of change in the computational paradigm based on new forms of hardware - in this case it’s the Neuromorphic Chip, but other developments include quantum computing and memristor type chips.

Intel’s New Self-Learning Chip Promises to Accelerate Artificial Intelligence

An increasing need for collection, analysis and decision-making from highly dynamic and unstructured natural data is driving demand for compute that may outpace both classic CPU and GPU architectures. To keep pace with the evolution of technology and to drive computing beyond PCs and servers, Intel has been working for the past six years on specialized architectures that can accelerate classic compute platforms. Intel has also recently advanced investments and R&D in artificial intelligence (AI) and neuromorphic computing.

The Loihi research test chip includes digital circuits that mimic the brain’s basic mechanics, making machine learning faster and more efficient while requiring lower compute power. Neuromorphic chip models draw inspiration from how neurons communicate and learn, using spikes and plastic synapses that can be modulated based on timing. This could help computers self-organize and make decisions based on patterns and associations.

The Loihi test chip offers highly flexible on-chip learning and combines training and inference on a single chip. This allows machines to be autonomous and to adapt in real time instead of waiting for the next update from the cloud. Researchers have demonstrated learning at a rate that is a 1 million times improvement compared with other typical spiking neural nets as measured by total operations to achieve a given accuracy when solving MNIST digit recognition problems. Compared to technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses many fewer resources on the same task.

Further, it is up to 1,000 times more energy-efficient than the general-purpose computing required for typical training systems.

In the first half of 2018, the Loihi test chip will be shared with leading university and research institutions with a focus on advancing AI.
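The article doesn't specify Loihi's neuron model, but the flavour of spiking computation it describes - integrate inputs over time, fire when a threshold is crossed - can be sketched with a textbook leaky integrate-and-fire neuron (a generic model, not Intel's circuit):

```python
# Minimal leaky integrate-and-fire neuron - a standard textbook model
# of spike-based computation, not a description of Loihi's hardware.
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Integrate input current over time with leakage; emit a spike (1)
    when the membrane potential crosses threshold, then reset to zero."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                      # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Weak input decays away and never crosses threshold;
# a strong input arriving on top of accumulated charge does.
print(lif_neuron([0.2, 0.2, 0.2, 0.2, 0.9]))  # [0, 0, 0, 0, 1]
```

Timing matters here: the same total input spread thinly produces no spike at all, which hints at why spike timing (rather than raw magnitude) can carry information in these architectures.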

And here’s another strong signal - a 5 min video from Bloomberg - of the emerging new paradigm of global energy geopolitics.

The Way We Get Power Is About to Change Forever

The age of batteries is just getting started. In the latest episode of our animated series, Sooner Than You Think, Bloomberg’s Tom Randall does the math on when solar plus batteries might start wiping fossil fuels off the grid.

Here’s a signal - a 4 min video, about a breakthrough in 3D printing. This is worth the view for anyone interested in progress in this domain.

Metallurgy Breakthrough: 3D Printing High-Strength Aluminum

HRL Laboratories, LLC, has made a breakthrough in metallurgy, developing a technique for successfully 3D printing high-strength aluminum alloys — including types Al7075 and Al6061 — that opens the door to additive manufacturing of engineering-relevant alloys.

This is a must-view 1.5 min video about a breakthrough in soft artificial muscles that are cheap and relatively easy to produce.

The Muscles Are Strong Enough to Lift up to 1000 Times Their Own Weight

A Columbia University engineering team has created 3D printed muscles which are 3X stronger than real muscles.

This is a great signal for several reasons - trajectory of automation, employment, demographics and social institutions including labor influence.

The rise of robots in the German labour market

Recent research has shown that industrial robots have caused severe job and earnings losses in the US. This column explores the impact of robots on the labour market in Germany, which has many more robots than the US and a much larger manufacturing employment share. Robots have had no aggregate effect on German employment, and robot exposure is found to actually increase the chances of workers staying with their original employer. This effect seems to be largely down to the efforts of works councils and labour unions, but is also the result of fewer young workers entering manufacturing careers.

Acemoglu and Restrepo develop an estimation approach from their theory, and apply it to local labour markets in the US (1993-2014). The empirical picture that emerges seems to confirm some of the darkest concerns. Specifically, they find that one additional robot reduces total employment by around three to six jobs. It also reduces average equilibrium wages for almost all groups in the labour market. So, displacement effects caused by robots seem to be widely dominant in the US.

Although robots do not affect total employment, they do have strongly negative impacts on manufacturing employment in Germany. We calculate that one additional robot replaces two manufacturing jobs on average. This implies that roughly 275,000 full-time manufacturing jobs have been destroyed by robots in the period 1994-2014. But, those sizable losses are fully offset by job gains outside manufacturing. In other words, robots have strongly changed the composition of employment by driving the decline of manufacturing jobs illustrated in Figure 1. Robots were responsible for almost 23% of this decline. But they have not been major killers so far when it comes to the total number of jobs in the German economy.

This is another breakthrough signal of change in conditions of change - in terms of automation, AI, 3D printing and medical advances.

A Chinese Robot Dentist Operated on a Human Patient for the First Time Ever

A robot dentist created in China has successfully performed dental surgery on a patient without human input. The robot implanted two artificial teeth within the margin of error required for the specific type of operation it was performing.
The South China Morning Post reports the procedure lasted about an hour, and involved the implanting of two teeth into a woman’s mouth. The artificial teeth, created using 3D printing technology, were fitted within a margin of error of 0.2 – 0.3 mm — the standard required for the type of surgery the robot was performing.

The Chinese robot dentist was created in response to a shortage of qualified human dentists and the disconcerting number of human-made errors. Dentists are always working within a small space within the mouth, and are sometimes unable to see what they’re doing.

Another important signal in the emergence of cryptocurrencies and blockchain-distributed-ledger technologies - this one from the Bank for International Settlements. A long read - useful for anyone interested in the future of currency. The article is an excellent summary of the current situation and has a very good list of current efforts to develop e-currencies. The next 10-15 years will definitely be a change in conditions of change regarding the medium of accounting for value creation and exchange.
CADcoin is an example of a wholesale CBCC. It is the original name for digital assets representing central bank money used in the Bank of Canada's proof of concept for a DLT-based wholesale payment system. CADcoin has been used in simulations performed by the Bank of Canada in cooperation with Payments Canada, R3 (a fintech firm), and several Canadian banks but has not been put into practice.
Sweden has one of the highest adoption rates of modern information and communication technologies in the world. It also has a highly efficient retail payment system. At the end of 2016, more than 5 million Swedes (over 50% of the population) had installed the Swish mobile phone app, which allows people to transfer commercial bank money with immediate effect (day or night) using their handheld device. The demand for cash is dropping rapidly in Sweden. Already, many stores no longer accept cash and some bank branches no longer disburse or collect cash.

Central bank cryptocurrencies

New cryptocurrencies are emerging almost daily, and many interested parties are wondering whether central banks should issue their own versions. But what might central bank cryptocurrencies (CBCCs) look like and would they be useful? This feature provides a taxonomy of money that identifies two types of CBCC - retail and wholesale - and differentiates them from other forms of central bank money such as cash and reserves. It discusses the different characteristics of CBCCs and compares them with existing payment options.

In less than a decade, bitcoin has gone from being an obscure curiosity to a household name. Its value has risen - with ups and downs - from a few cents per coin to over $4,000. In the meantime, hundreds of other cryptocurrencies - equalling bitcoin in market value - have emerged. While it seems unlikely that bitcoin or its sisters will displace sovereign currencies, they have demonstrated the viability of the underlying blockchain or distributed ledger technology (DLT). Venture capitalists and financial institutions are investing heavily in DLT projects that seek to provide new financial services as well as deliver old ones more efficiently. Bloggers, central bankers and academics are predicting transformative or disruptive implications for payments, banks and the financial system at large.

Lately, central banks have entered the fray, with several announcing that they are exploring or experimenting with DLT, and the prospect of central bank crypto- or digital currencies is attracting considerable attention. But making sense of all this is difficult. There is confusion over what these new currencies are, and discussions often occur without a common understanding of what is actually being proposed. This feature seeks to provide some clarity by answering a deceptively simple question: what are central bank cryptocurrencies (CBCCs)?

To that end, we present a taxonomy of money that is based on four key properties: issuer (central bank or other); form (electronic or physical); accessibility (universal or limited); and transfer mechanism (centralised or decentralised). The taxonomy defines a CBCC as an electronic form of central bank money that can be exchanged in a decentralised manner known as peer-to-peer, meaning that transactions occur directly between the payer and the payee without the need for a central intermediary. This distinguishes CBCCs from other existing forms of electronic central bank money, such as reserves, which are exchanged in a centralised fashion across accounts at the central bank. Moreover, the taxonomy distinguishes between two possible forms of CBCC: a widely available, consumer-facing payment instrument targeted at retail transactions; and a restricted-access, digital settlement token for wholesale payment applications.
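The four-property taxonomy can be made concrete as a small data structure (the property names follow the article; the example instances are illustrative):

```python
# The BIS four-property taxonomy of money, expressed as a data structure.
# Property names follow the article; the instances below are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    issuer: str         # "central bank" or "other"
    form: str           # "electronic" or "physical"
    accessibility: str  # "universal" or "limited"
    transfer: str       # "centralised" or "decentralised"

    def is_cbcc(self):
        """A CBCC is electronic central bank money exchanged peer-to-peer."""
        return (self.issuer == "central bank"
                and self.form == "electronic"
                and self.transfer == "decentralised")

cash = Money("central bank", "physical", "universal", "decentralised")
reserves = Money("central bank", "electronic", "limited", "centralised")
retail_cbcc = Money("central bank", "electronic", "universal", "decentralised")
wholesale_cbcc = Money("central bank", "electronic", "limited", "decentralised")

print([m.is_cbcc() for m in (cash, reserves, retail_cbcc, wholesale_cbcc)])
# [False, False, True, True]
```

The retail/wholesale split the article describes then falls out of the `accessibility` property: universal for the consumer-facing payment instrument, limited for the wholesale settlement token.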

This is a very important signal - and a very scary one - that we need to collectively prepare for. It is one of the new conditions in the change in conditions of change. The first instinct is to want to return to a past - a sort of restorative nostalgia. But the way forward is the need to create conditions of mutual accountability, based on reciprocal transparency - a sort of new social compact of trust, a sort of social privacy: the capacity to use the data from all of our lives as a common pool of value and potential value creation - not for marketing - but for improving our health, welfare, social lives and our democracies. But the social compact has to build trust that no harm will be done, and that if harm is done it can be transparently addressed and recourse enabled.

I asked Tinder for my data. It sent me 800 pages of my deepest, darkest secrets

The dating app knows me better than I do, but these reams of intimate information are just the tip of the iceberg. What if my data is hacked – or sold?
At 9.24pm (and one second) on the night of Wednesday 18 December 2013, from the second arrondissement of Paris, I wrote “Hello!” to my first ever Tinder match. Since that day I’ve fired up the app 920 times and matched with 870 different people. I recall a few of them very well: the ones who either became lovers, friends or terrible first dates. I’ve forgotten all the others. But Tinder has not.

The dating app has 800 pages of information on me, and probably on you too if you are also one of its 50 million users. In March I asked Tinder to grant me access to my personal data. Every European citizen is allowed to do so under EU data protection law, yet very few actually do, according to Tinder.

Some 800 pages came back containing information such as my Facebook “likes”, my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened … the list goes on.

What will happen if this treasure trove of data gets hacked, is made public or simply bought by another company? I can almost feel the shame I would experience. The thought that, before sending me these 800 pages, someone at Tinder might have read them already makes me cringe.

As I flicked through page after page of my data I felt guilty. I was amazed by how much information I was voluntarily disclosing: from locations, interests and jobs, to pictures, music tastes and what I liked to eat. But I quickly realised I wasn’t the only one. A July 2017 study revealed Tinder users are excessively willing to disclose information without realising it.

Tinder’s privacy policy clearly states: “you should not expect that your personal information, chats, or other communications will always remain secure”. As a few minutes with a perfectly clear tutorial on GitHub called Tinder Scraper that can “collect information on users in order to draw insights that may serve the public” shows, Tinder is only being honest.

On another view - the possibilities of harnessing collective intelligence through the use of algorithmic intelligences (as new types of ‘cortex’ nodes) are another change in conditions of change. I think Valpola is a ‘signal’ to watch.
“All of the AI which is currently in use is second rate. It's a stupid lizard brain that doesn't understand anything about the complexities of the world. So it requires a lot of data. What we want to build is more like the mammalian brain.”

“In the real world,” Valpola says, “interaction is a very scarce resource. Many of the techniques that I use and that have demonstrated amazing results are based on simulated environments. AlphaGo, for instance. It's a really great system, but the number of games that it needs to play in order to learn is – at some point it was 3,000 years of play before it reaches top human player level. That’s lifetimes.”

Thinking about AIs that can understand relationships also helps appreciate what they might become. Valpola laughs off common conundrums of AI such as the trolley problem or the paperclip maximiser. “You would have to have an enormously intelligent system be able to take over the world. And yet on the other side enormously stupid system to realize this is what they want me to do. I don't see that happening.”
Instead, he says, AIs will see themselves as part of a complex web of connections with other intelligent beings: not only AIs (as part of the internet of minds), but also human beings. They will become, for better or worse, social.

Harri Valpola dreams of an internet of beautiful AI minds

The Finnish computer scientist says he has solved AI's fundamental problem: how to make machines that plan
Harri Valpola dreams of an internet of minds. “It will look like one huge brain from our perspective,” he says, “in much the same way that the internet looks like one big thing.” That impression will be an illusion, but we will think it all the same. It is the only way our limited human brains will be able to comprehend an internet of connected artificial intelligences.

Valpola has set himself the task of building this future network. But at the moment his goal seems far away. Despite all the advances made in recent years, the Finnish computer scientist is disappointed with the rate of progress in artificial intelligence.

Many in the field agree. “I've met Harri a few times, and we have similar views on AI and deep learning,” says Murray Shanahan, professor of cognitive robotics at Imperial College London. “I think that he is right,” adds Bengio. “I have myself launched a research project with a similar objective. Reaching up from our breakthroughs in perception to higher-level cognition is a crucial aspect of future progress.”
