Thursday, October 27, 2016

Friday Thinking 21 Oct. 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.


“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
Content
Quotes

Articles
Bad Robots



Vehicles and the ways they are used are expected to change more over the next two decades than in the last 100 years, propelled by the new mobility trends of vehicle electrification, shared mobility and autonomous driving. Additional factors, such as access to public transit, air quality concerns, urbanization and the decentralization of the energy system are also triggering changes in the mobility sector.

Cities Could See $45B Benefits From Electric, Shared and Autonomous Cars



“If I were asked to condense the whole of the present century into one mental picture,” the novelist J. G. Ballard wrote in 1971, “I would pick a familiar everyday sight: a man in a motor car, driving along a concrete highway to some unknown destination. Almost every aspect of modern life is there, both for good and for ill — our sense of speed, drama, and aggression, the worlds of advertising and consumer goods, engineering and mass-manufacture, and the shared experience of moving together through an elaborately signaled landscape.” In other words: Life is a highway. And the highway, Ballard believed, was a bloody, beautiful mess.

For Ballard, the car posed a beguiling paradox. How could it be such an erotic object, at once muscular and voluptuous, virginal and “fast,” while also being one of history’s deadliest inventions? Was its popularity simply a triumph of open-road optimism — a blind trust that the crash would only ever happen to someone else? Ballard thought not. His hunch was that, on some level, drivers are turned on by the danger, and perhaps even harbor a desire to be involved in a spectacular crash. A few years later, this notion would unfurl, like a corpse flower, into Crash, his incendiary novel about a group of people who fetishize demolished cars and mangled bodies.

Over the course of a century, Ballard wrote, the “perverse technology” of the automobile had colonized our mental landscape and transformed the physical one. But he sensed that the car’s toxic side effects — the traffic, the carnage, the pollution, the suburban sprawl — would soon lead to its demise. At some point in the middle of the 21st century, he wrote, human drivers would be replaced with “direct electronic control,” and it would become illegal to pilot a car. The sensuous machines would be neutered, spayed: stripped of their brake pedals, their accelerators, their steering wheels. ­Driving, and with it, car culture as we know it, would end. With the exception of select “motoring parks,” where it would persist as a nostalgic curiosity, the act of actually steering a motor vehicle would become an anachronism.

In Ballard’s grim reckoning, the end of driving would be just one step in our long march toward the “benign dystopia” of rampant consumerism and the surveillance state, in which people willingly give up control of their lives in exchange for technological comforts. The car, flawed as it was, functioned as a bulwark against “the remorseless spread of the regimented, electronic society.” “The car as we know it now is on the way out,” Ballard wrote. “To a large extent I deplore its passing, for as a basically old-fashioned machine it enshrines a basically old-fashioned idea — freedom.”

What Happens to American Myth When You Take the Driver Out of It?

The self-driving car and the future of the self.



In the 19th century the word for a strike was “taking tools out of the shop”. In the 20th century the management owned the tools. In the 21st century the tools are commonly owned, maintained and free.
Another way of understanding privatisation is to say: how can we do public services as expensively as possible?

Paul Mason - Postcapitalism and the city

Keynote at Barcelona Initiative for Technological Sovereignty



Here’s another weak signal - don’t let the failure fool you - it’s not over till the general will of the people holds the day. This is a 3 min video - well worth the view.

The astounding story of Iceland's constitution - and its government's failure

In 2012, more than two-thirds of Iceland's population ratified the most democratically crafted constitution in world history, written in public and drafted by a representative committee of 1,000 Icelanders; now in a stirring video in the leadup to the next national election, Icelanders are calling on one another to only vote for candidates who'll take action on the constitution the nation voted for.


The cognitive dissonance between neo-liberal economics’ rational agent and the huge role that marketing plays in our economy is stunning. Homo economicus knows what they want and rationally optimizes choices based on available information. Yet the target of marketing is the object of incantations - the creation of a consumer comes first, to generate demand for whatever corporations find easy to manufacture. Easy examples: bottled water and mass-produced sugar water winning out over public drinking fountains and cheap tea. This is a 12 min video outlining the science behind the incantations that produce desire in all of us for whatever most easily produces profit.

Science Of Persuasion

This animated video describes the six universal Principles of Persuasion that have been scientifically proven to make you most effective as reported in Dr. Cialdini’s groundbreaking book, Influence. This video is narrated by Dr. Robert Cialdini and Steve Martin, CMCT (co-author of YES & The Small Big).

About Robert Cialdini:
Dr. Robert Cialdini, Professor Emeritus of Psychology and Marketing, Arizona State University, has spent his entire career researching the science of influence, earning him a worldwide reputation as an expert in the fields of persuasion, compliance, and negotiation.


I think we will be seeing a lot more of this sort of analysis - despite efforts to provide only minimal information about the web of relationships that entangle all corporations - and in fact all of us - the data trails we leave in the digital environment eventually become transparent. This is a great 10 min read.
There’s no way for a user to simply download its entire database. “So we made a web crawler,” says Mizuno. “It’s a tool that goes to their website, searches for a company, and downloads that one company’s business relationship list. Then it repeats the searching and downloading for all the other companies. It was difficult.”
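The crawl loop Mizuno describes - search a company, download its relationship list, repeat for every newly discovered partner - is essentially a breadth-first traversal. A minimal sketch in Python, where `fetch_partners` is a hypothetical stand-in for the site-specific search-and-download step:

```python
from collections import deque

def crawl_relationships(seed_companies, fetch_partners, max_companies=100_000):
    """Breadth-first crawl of a business-relationship graph.

    fetch_partners(company) -> list of customer/supplier names
    (hypothetical; in practice this wraps the HTTP search-and-download step).
    """
    edges = {}                      # company -> list of its partners
    queue = deque(seed_companies)
    seen = set(seed_companies)
    while queue and len(edges) < max_companies:
        company = queue.popleft()
        partners = fetch_partners(company)
        edges[company] = partners
        for p in partners:
            if p not in seen:       # only enqueue companies not yet visited
                seen.add(p)
                queue.append(p)
    return edges

# Toy stand-in for the real scraper:
toy_graph = {"A": ["B", "C"], "B": ["C"], "C": []}
edges = crawl_relationships(["A"], lambda c: toy_graph.get(c, []))
```

The `seen` set is what keeps the "repeat for all the other companies" step from revisiting firms - the expensive part in practice is the fetching, not the traversal.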

“This opens up a whole area of data science applied at a very large scale. Imagine building a map of connectivity in and among firms in Europe, then anticipating what that network would look like after the UK’s exit from the EU. A picture of how the continent’s trading relationships are going to evolve would speak volumes to [UK Prime Minister] May as to the consequences of the Brexit.”

But if his network could reveal the costs of an economic mistake like Brexit, thought Mizuno, what if he applied it to a genuine, humanitarian disaster?

The Physicist Who Sees Crime Networks

A lone Japanese scientist is discovering the shady ties that connect companies engaged in illegal trade.
Takayuki Mizuno is an econophysicist at Japan’s National Institute of Informatics, and an unlikely heir to Holmes’s deerstalker. His office overlooks the Imperial Palace in Tokyo, for centuries a symbol of stability and order. From it the young scientist surveys the world, applying the tools of physics to the study of economic and social systems. He has created software to spot stock market bubbles, and a digital measuring stick for charting the progress of start-ups.

Now Mizuno believes he might be able to use the same technologies to unravel criminal networks and track the business ties of terrorists. Mizuno was surprised to find that companies behave rather like people. Like the urban myth of there being six degrees of separation between Kevin Bacon and any other actor, Mizuno found that 80% of the world’s firms could be connected to any other business via six customers or suppliers. For example, Elpitiya Plantations, a producer of fine teas in Sri Lanka, is linked to financial behemoth Western Union by hopping from a hotel chain to a fertilizer company to food giant Nestlé to bargain US retailer Dollar General.
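The "six customers or suppliers" finding is a claim about shortest paths in the supply-chain graph. A toy breadth-first-search check of hop counts - company names abbreviated and links invented for illustration:

```python
from collections import deque

def degrees_of_separation(graph, start, goal):
    """Shortest number of customer/supplier hops between two firms (BFS)."""
    if start == goal:
        return 0
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, dist = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # the two firms are not connected

# Toy version of the tea-plantation-to-Western-Union chain above:
chain = {
    "Elpitiya": ["Hotel"], "Hotel": ["Fertilizer"], "Fertilizer": ["Nestle"],
    "Nestle": ["DollarGeneral"], "DollarGeneral": ["WesternUnion"],
}
hops = degrees_of_separation(chain, "Elpitiya", "WesternUnion")
```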

Mizuno also found that companies naturally cluster together into communities, with stronger trading links within the community than without. Mizuno expected to see political and economic organizations, such as the EU or NAFTA, reflected in his data. Instead, he found nearly 3,500 communities with only loose geographic or industrial ties.

Mizuno is now also tracing the movements of arms parts and conflict oil in the global marketplace, using a massive networked analysis of blacklisted companies. “We cannot [directly] investigate smuggling using official supply chain data,” says Mizuno. “But some undesirable goods are distributed legally through third countries. Using my model and the data, we can find the good, clean companies and the bad ones.” He expects to publish a paper on this next year.


This is an older essay - from 2010 - by Bruce Sterling, one of my favorite sci-fi writers and a decidedly acerbic journalist of the future. It is well worth the read - although it’s a 23 min one - for anyone interested in past prognoses of crypto-approaches to the future of security.
So Wikileaks is a manifestation of something that has been growing all around us, for decades, with volcanic inexorability. The NSA is the world’s most public unknown secret agency. And for four years now, its twisted sister Wikileaks has been the world’s most blatant, most publicly praised, encrypted underground site.
Wikileaks is “underground” in the way that the NSA is “covert”; not because it’s inherently obscure, but because it’s discreetly not spoken about.
The NSA is “discreet,” so, somehow, people tolerate it. Wikileaks is “transparent,” like a cardboard blast shack full of kitchen-sink nitroglycerine in a vacant lot.

The Blast Shack

The Wikileaks Cablegate scandal is the most exciting and interesting hacker scandal ever. I rather commonly write about such things, and I’m surrounded by online acquaintances who take a burning interest in every little jot and tittle of this ongoing saga. So it’s going to take me a while to explain why this highly newsworthy event fills me with such a chilly, deadening sense of Edgar Allen Poe melancholia.

But it sure does.
Part of this dull, icy feeling, I think, must be the agonizing slowness with which this has happened. At last — at long last — the homemade nitroglycerin in the old cypherpunks blast shack has gone off. Those “cypherpunks,” of all people.

Way back in 1992, a brainy American hacker called Timothy C. May made up a sci-fi tinged idea that he called “The Crypto Anarchist Manifesto.” This exciting screed — I read it at the time, and boy was it ever cool — was all about anonymity, and encryption, and the Internet, and all about how wacky data-obsessed subversives could get up to all kinds of globalized mischief without any fear of repercussion from the blinkered authorities. If you were of a certain technoculture bent in the early 1990s, you had to love a thing like that.


This is an amazing but brief demonstration of Google’s new deep learning voice simulator - it includes both an explanation and samples. It won’t be long before we can access any book we want with a text-to-speech function that sounds entirely human. This is definitely worth the view and the listen.

WaveNet: A Generative Model for Raw Audio

This post presents WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems, reducing the gap with human performance by over 50%.

We also demonstrate that the same network can be used to synthesize other audio signals such as music, and present some striking samples of automatically generated piano pieces.

Allowing people to converse with machines is a long-standing dream of human-computer interaction. The ability of computers to understand natural speech has been revolutionised in the last few years by the application of deep neural networks (e.g., Google Voice Search). However, generating speech with computers  — a process usually referred to as speech synthesis or text-to-speech (TTS) — is still largely based on so-called concatenative TTS, where a very large database of short speech fragments are recorded from a single speaker and then recombined to form complete utterances. This makes it difficult to modify the voice (for example switching to a different speaker, or altering the emphasis or emotion of their speech) without recording a whole new database.
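The generative side works autoregressively: each raw audio sample is drawn from a distribution conditioned on all the samples before it. A toy sketch of that sampling loop, with a dummy uniform model standing in for the real dilated-convolution network:

```python
import numpy as np

def generate(predict_dist, n_samples, n_levels=256, seed=0):
    """Autoregressive generation: sample one value at a time, feeding
    each new sample back in as context for the next prediction.

    predict_dist(context) -> probability vector over n_levels quantized
    amplitude levels (WaveNet uses 256 mu-law levels).
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        probs = predict_dist(samples)            # condition on all history
        samples.append(rng.choice(n_levels, p=probs))
    return samples

# Dummy model: uniform noise (a real WaveNet is a deep dilated CNN).
uniform = lambda ctx: np.full(256, 1.0 / 256)
audio = generate(uniform, n_samples=100)
```

The expensive part in the real system is that `predict_dist` is a deep network evaluated once per sample - at 16,000+ samples per second of audio.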


This belongs to the ongoing development of AI as well as to the ‘Moore’s Law is Dead - Long Live Moore’s Law’ file.

Self-Learning AI: This New Neuro-Inspired Computer Trains Itself

A COMPUTING ROAD LESS TRAVELED
A team of researchers from Belgium think that they are close to extending the anticipated end of Moore’s Law, and they didn’t do it with a supercomputer. Using an artificial intelligence (AI) algorithm called reservoir computing, combined with another algorithm called backpropagation, the team developed a neuro-inspired analog computer that can train itself and improve at whatever task it’s performing.

Reservoir computing is a neural algorithm that mimics the brain’s information processing abilities. Backpropagation, on the other hand, allows for the system to perform thousands of iterative calculations that reduce error, which lets the system improve its solution to a problem.
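As a rough illustration of that division of labour - a fixed random recurrent "reservoir" plus a trained linear readout - here is a minimal echo state network sketch (sizes, input scaling and the 0.9 spectral radius are illustrative choices, not the Brussels team's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random reservoir; rescale so the spectral radius is < 1
# (the "echo state" condition: past inputs fade instead of exploding).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained (here: ridge regression) -
# the toy task is to recall the previous input value.
u = rng.uniform(-1, 1, 500)
target = np.roll(u, 1)
X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ target)
pred = X @ W_out
```

The point of the architecture is that the recurrent weights `W` are never trained - which is what makes fixed analog hardware a plausible substrate for the reservoir.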

“Our work shows that the backpropagation algorithm can, under certain conditions, be implemented using the same hardware used for the analog computing, which could enhance the performance of these hardware systems,” Piotr Antonik explains.
Antonik, together with Michiel Hermans, Marc Haelterman, and Serge Massar at the Université Libre de Bruxelles in Brussels, Belgium, published their study on this self-learning hardware in the journal Physical Review Letters.


Here is another signal of the emerging domain of Big Data Science - one that both incorporates algorithmic learning and is likely to emerge as part of the ubiquitous Noosphere of knowledge. There is Too Much To Know - and we all need ways to enhance our minds. If the Internet is the global nervous system, algorithmic learning is the new global neocortex.
EOL might be able to change that by applying state-of-the-art computational power to disparate collections of biological data. The project is a free and open digital collection of biodiversity facts, articles and multimedia, one of the largest in the world. Headquartered at the Smithsonian Institution and with its 357 partners and content providers including Harvard University and the New Library of Alexandria in Egypt, EOL has grown from 30,000 pages when it launched in 2008 to more than 2 million, with 1.3 million pages of text, maps, video, audio and photographs, and supports 20 languages.

Big Data Just Got Bigger as IBM's Watson Meets the Encyclopedia of Life

An NSF grant marries one of the world's largest online biological archives with IBM's cognitive computing and Georgia Tech's modeling and simulation
After 2,000 years, the ultimate encyclopedia of life is at the cusp of a new data-driven era. A grant from the National Science Foundation has been awarded to The Encyclopedia of Life (EOL), IBM and Georgia Institute of Technology. The grant will enable massive amounts of data to be processed and cross-indexed in ways that will allow groundbreaking science to be done.

In the year 77 AD, Pliny the Elder began writing the world's first encyclopedia, Natural History. It included everything from astronomy to botany to zoology to anthropology and more. Pliny attempted to put everything he could personally gather about the natural world into a single written work. For the last 2,000 years, a long succession of scientists inspired by Pliny have pursued the same vision.

Pliny included 20,000 topics in 36 volumes but ran into the limitations of what a single person can discover, record and process within a human lifespan. He died during the eruption of Mount Vesuvius before he could finish a final edit of his magnum opus. Even in his own era, it wasn't possible for one person to read all the books, learn all the things, and explain it all to the world.


This is an interesting development - not just for restorative prosthetics for people who have lost a limb - but ultimately for both robotics and more importantly for extending any person’s mind into a smart environment that includes robotics - autonomous or otherwise.
Of course, mind-controlled robots are still years away from consumer applications, McLoughlin says. At the moment, they are still too expensive, too bulky and too finicky to be used outside a laboratory setting. And there's no good way to control them without implanting electrodes in the brain.

Brain Implant Restores Sense Of Touch To Paralyzed Man

Twelve years ago, a car wreck took away Nathan Copeland's ability to control his hands or sense what his fingers were touching.

A few months ago, researchers at the University of Pittsburgh and the University of Pittsburgh Medical Center gave Copeland a new way to reach out and feel the world around him. It's a mind-controlled robotic arm that has pressure sensors in each fingertip that send signals directly to Copeland's brain.

The scientists published details of their work online Thursday in the journal Science Translational Medicine.
"It's a really weird sensation," Copeland, now 30, says in a video made shortly after he first tried the system. "Sometimes it feels, kind of, like electrical and sometimes it's more of a pressure." But he also describes many of the sensations coming from his robotic hand as "natural."


Here is a potential future of brain implants for repair - and maybe later for enhancement. They are also primed to accelerate technology evolution - the brain-machine interface, or maybe the brain-brain interface as well. The images are worth the view.
Electronic brain interfaces like these could someday be crucial for people with neurological diseases such as Parkinson’s. The disease causes a group of neurons in one area of the brain to begin dying off, triggering uncontrollable tremors and shakes. Sending targeted electrical jolts to this area can help whip the living neurons back into shape and stop Parkinson’s symptoms.

Injectable Wires for Fixing the Brain

Novel treatments for neurological diseases might be possible with a flexible mesh that can prod individual brain cells.
In a basement laboratory at Harvard University, a few strands of thin wire mesh are undulating at the bottom of a cup of water, as if in a minuscule ribbon dance. The meshes—about the length of a pen cap—are able to do something unprecedented: once injected into the brain of a living mouse, they can safely stimulate individual neurons and measure the cells’ behavior for more than a year.

Charles Lieber, a Harvard chemist and nanomaterials pioneer, had a different idea: a conductive brain interface that mirrors the fine details of the brain itself. Just as neurons connect with each other in a network that has open spaces where proteins and fluids pass through, the crosshatches in Lieber’s bendable mesh electronics leave room for neurons to fit in, rather than being pushed to the side by a boxy foreign object. “This device effectively blurs the interface between a living system and a non-living system,” says Guosong Hong, a postdoc in Lieber’s lab.

The extremely flexible mesh, made of gold wires sandwiched between layers of a polymer, easily coils into a needle so it can be injected rather than implanted, avoiding a more extensive surgery. Part of the mesh sticks out through the brain and a hole in the skull so that it can be wired up to a computer that controls the electric jolts and measures the neurons’ activity. But eventually, Lieber says, the controls and power supply could be implanted in the body, as they are in today’s systems for deep brain stimulation.


Here’s another possible breakthrough - the onset of gene therapies.
This week, the company presented Phase III data at the American Academy of Ophthalmology’s annual meeting showing that 27 of 29 subjects who were virtually blind experienced an increase in vision function for more than a year following the procedure. No patients have had any serious adverse events associated with the treatment, according to the company. High said the therapy is administered in a 45-minute surgical procedure under anesthesia, and a change in vision can be seen within 30 days.

Gene Therapy in U.S. Is On Track for Approval as Early as Next Year

Spark Therapeutics is within striking distance of a landmark green light from the FDA for its treatment for certain forms of blindness.
The first gene therapy for an inherited disease in the U.S. is closer to reality than ever before.
Spark Therapeutics is only the second company to pursue an application to the U.S. Food and Drug Administration for such a treatment, but it’s likely to be the first to hit the market.

Speaking at EmTech MIT 2016 on Tuesday, Katherine High, Spark’s cofounder, confirmed that the company is on track to launch its first product next year. The gene therapy, known as SPK-RPE65, targets mutations in people’s eyes that often lead to blindness. Currently, there are no drugs available to treat these disorders, known as inherited retinal dystrophies.

Spark plans to complete its FDA application by early 2017. If approved next year, the therapy would become the first for an inherited disease to be given the green light in the U.S. Two such gene therapies, Strimvelis and Glybera, have already been approved in Europe.


Here is a recent advance in Google’s deepmind - wonder where we will be in another decade.
"These models... can learn from examples like neural networks, but they can also store complex data like computers,"

Google's AI can now learn from its own memory independently

The DeepMind artificial intelligence (AI) being developed by Google's parent company, Alphabet, can now intelligently build on what's already inside its memory, the system's programmers have announced.

Their new hybrid system – called a Differential Neural Computer (DNC) – pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank.

What the DNC is doing is effectively combining external memory (like the external hard drive where all your photos get stored) with the neural network approach of AI, where a massive number of interconnected nodes work dynamically to simulate a brain.

At the heart of the DNC is a controller that constantly optimises its responses, comparing its results with the desired and correct ones. Over time, it's able to get more and more accurate, figuring out how to use its memory data banks at the same time.
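The core mechanism is a differentiable read/write over an external memory matrix: the controller emits soft attention weights, writes blend new vectors into the attended rows, and reads are attention-weighted sums. A toy sketch (sizes and the hand-picked attention scores are illustrative; the real DNC learns its addressing and also tracks usage and temporal links):

```python
import numpy as np

N, W = 8, 4                      # memory rows, width of each row
memory = np.zeros((N, W))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def write(memory, weights, vector):
    """Soft write: each row moves toward `vector` by its attention weight."""
    return memory * (1 - weights[:, None]) + np.outer(weights, vector)

def read(memory, weights):
    """Soft read: attention-weighted sum of memory rows."""
    return weights @ memory

# A controller (here: hand-picked scores) attends sharply to row 2,
# writes a vector there, then reads it back with the same attention.
w = softmax(np.array([0, 0, 10.0, 0, 0, 0, 0, 0]))
memory = write(memory, w, np.array([1.0, 2.0, 3.0, 4.0]))
recalled = read(memory, w)
```

Because every step is a smooth function of the weights, gradients flow through the memory operations - that differentiability is what lets the whole system learn from examples like a neural network while storing data like a computer.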


The advent of the self-driving car may come sooner than we expect - here is an account of trials in Singapore. There’s a 16 min video presentation as well.
...riders seem to experience a steep acceptance curve: a few minutes of nervousness quickly turns into complacency and eventually boredom. Still, they seem unnerved by driving behavior that strikes them as particularly unhuman. “Our first iteration of driverless cars kind of drove like trolleys on a track,” Iagnemma said. “This uncanny notion threw people off. We now appreciate that it’s vitally important.”

Novelty of Driverless Cars Wears Off Quickly for First-Timers

NuTonomy is conducting “the world’s largest, most expensive focus group” with self-driving taxis in Singapore.
How humanlike should self-driving cars be?

It’s a question that nuTonomy, a company that’s launched a self-driving taxi service in Singapore, is trying to answer. It appears that some version of the “uncanny valley”—a visceral negative response people feel to robots that seem almost human, but not human enough—also applies to automated vehicles.

“For better or worse, we have to bridge this divide between developing cars that drive by the book and cars that drive how you and I drive,” Karl Iagnemma, CEO of nuTonomy, said today at EmTech MIT 2016, in Cambridge, Massachusetts. “It’s an open question where on the continuum you want to drive—and it’s something we’re researching.”


The electric car will certainly enable a faster uptake of the driverless car - owners could let anyone in the family ‘use the car’. There’s been lots of talk about the social implications - this article discusses the potential economic implications.
Battery prices fell 35 percent last year and are on a trajectory to make electric vehicles as affordable as their gasoline counterparts over the next six years, according to Bloomberg New Energy Finance.
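As a back-of-envelope check on that trajectory: compound annual decline is just p0 × (1 − r)^t. A quick sketch - the starting price and decline rates are illustrative assumptions, not Bloomberg's figures:

```python
def price_after(p0, annual_decline, years):
    """Compound annual decline: p0 * (1 - r)^t."""
    return p0 * (1 - annual_decline) ** years

# Illustrative 2016 pack price of $300/kWh (assumed, not from the article):
for r in (0.35, 0.15):
    p = price_after(300, r, 6)
    print(f"{r:.0%}/yr decline -> ${p:.0f}/kWh after 6 years")
```

Even the conservative 15 percent case cuts the price by more than 60 percent over six years; the point is how quickly compounding declines bite.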

Batteries May Trip ‘Death Spiral’ in $3.4 Trillion Credit Market

Battery technologies starting to disrupt the electricity and automobile industries may also emerge as a trillion-dollar threat to credit markets, according to Fitch Ratings.
A quarter of outstanding global corporate debt, or as much as $3.4 trillion, is linked to the utility- and auto-industry bonds that rely on fossil fuel activities, the ratings agency wrote in a report published Tuesday.

Batteries have the potential to “tip the oil market from growth to contraction earlier than anticipated,” according to Fitch. “The narrative of oil’s decline is well rehearsed -- and if it starts to play out there is a risk that capital will act long before” and in the worst case result in an “investor death spiral.”

While hybrid and battery-only cars are making slow progress in denting sales of gasoline and diesel-driven vehicles, their growth trajectory may be grossly underestimated, said the authors of the study. The clean-energy research unit of Bloomberg LP estimates that battery-electric vehicles, which only run on power from a plug, will displace 13 million barrels of oil a day by 2040.


This is an interesting signal for the transformation of the operating room - and operating staff.
“Surgery is more than just hand-eye coördination,” said Garcia. “It’s about how well you perceive anatomy—tumors, nerves, and vessels—and your strategy [during an operation].” When asked to comment on Verb's observations about its technology, Intuitive's VP of global public affairs, Paige Bischoff, said that "many" of the 3.5 million da Vinci procedures to date were for cancer or complex surgeries.

The Recipe for the Perfect Robot Surgeon

Alphabet and Johnson & Johnson say dexterous robots equipped with artificial intelligence will make surgeons more productive.
Intuitive Surgical’s da Vinci robot is a technical marvel. Nearly half a million operations were performed in the U.S. by surgeons controlling its large, precise arms last year. One in four U.S. hospitals has one or more of the machines, which perform the majority of robotic surgeries worldwide and are credited with making minimally invasive surgery commonplace.

But when executives from Verb Surgical, a secretive joint venture between Alphabet and Johnson & Johnson, presented at the robotics industry conference RoboBusiness late last month, they made the da Vinci sound lame.

Intuitive’s machine, with an average selling price of $1.54 million, is too expensive and bulky, they grumbled. Pablo Garcia Kilroy, Verb’s vice president of research and technology, complained that while da Vinci is an impressive tool, it’s a dumb one that hasn’t widely transformed surgery. He said that while it enables surgeons to perform very delicate movements, it doesn’t assist with the cognitive skills that set the best surgeons apart.


Beyond enhanced and new senses will be new forms of sensibles anticipated by many science fiction writers. The 4 min video gives a hint of what augmented reality will be able to offer.
If another Hololens user was to use their headset at the exhibition, they, too, would be able to see the floating text and even edit it — since Park made the sculpture interactive. That is something that could be a real hallmark of augmented reality in the future, with successive people adding to art exhibits over time.

DIGITAL GRAFFITI HIDDEN IN AUGMENTED REALITY AT WASHINGTON STATE ART MUSEUM

As the world starts to get its first taste of augmented reality technology through smartphones and developer headsets, not only do we have whole new virtual worlds to enjoy, but there is a new virtual veneer over the real one that can be exploited. No doubt it will eventually be used to put obnoxious advertising everywhere, but then artists can always hit back — with digital graffiti.

In fact, the first example of this has already made a real (virtual) world appearance, at the Bellevue Arts Museum in Washington state. Visitors with a Microsoft Hololens headset would have seen all of the same sculptures as everyone else, but also one that is entirely digital.

It is entitled Holographic Type Sculpture and was placed inside the museum by Microsoft user experience designer DongYhoon Park as a demonstration of what Hololens can do. As if getting a jump start on the question of whether graffiti is art, placing it within an art museum seemed quite apt.


Here’s a fascinating possibility for drones. There’s a 2 min video.

Autonomous robotic garden drives itself around the city in search of sun

You've probably heard about autonomous cars by now - but what about a self-driving garden? University College London’s (UCL) Interactive Architecture Lab designed and built a nomadic, self-driving, and self-cultivating garden that they’ve named Hortum machina, B (the ‘B’ is short for Buckminster ‘Bucky’ Fuller). Encased in a large geodesic sphere, the modular garden is wrapped around a robotic aluminum core that monitors the plants’ responses to the environment and is able to propel the structure towards sunlight to best satisfy the garden’s needs.


And the role of drones is expanding into a global sensorium.
Argo is hardly the only fleet of scientific tools collecting data on ocean warming. But much of what scientists do know about the extent of the ocean’s heat-trapping ability is because of this international program, which has collected more than 1.5 million measurements.

How Much Heat Does the Ocean Trap? Robots Find Out

3,500 aquatic robots descend a mile below the surface and back, every 10 days
A fleet of robots, trolling the oceans and measuring their heat content, has revolutionized scientists’ ability to study how climate change is affecting the seas.
Now the aquatic machines called Argo floats are going into the deepest ocean abyss.

“We know a lot from Argo now that we have over a decade’s worth of temperature data,” said Gregory Johnson, a researcher with the National Oceanic and Atmospheric Administration’s Pacific Marine Environmental Laboratory. “We now know a lot about the upper ocean and how much heat it’s taking up, but we know less about the deep ocean heat uptake.”

A report released last month at the International Union for Conservation of Nature (IUCN) World Conservation Congress concluded that oceans have taken up 93 percent of the warming created by humans since the 1970s. To put that in perspective, if the heat generated between 1955 and 2010 had gone into the Earth’s atmosphere instead of the oceans, temperatures would have jumped by nearly 97 degrees Fahrenheit (E&ENews PM, Sept. 7).
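That startling conversion follows from the huge heat-capacity mismatch between ocean and atmosphere. A rough order-of-magnitude sketch using standard textbook masses and specific heats - this is my back-of-envelope, not the report's calculation:

```python
# Approximate totals (standard textbook values)
ocean_mass = 1.4e21        # kg
atmos_mass = 5.1e18        # kg
c_water = 4186.0           # J/(kg K), specific heat of water (approx.)
c_air = 1005.0             # J/(kg K), specific heat of air, constant pressure

ratio = (ocean_mass * c_water) / (atmos_mass * c_air)
print(f"ocean/atmosphere heat capacity ratio ~ {ratio:.0f}")

# So heat that warmed the whole ocean by ~0.05 C would, dumped into the
# atmosphere alone, raise it by roughly:
delta_atmos_C = 0.05 * ratio
print(f"~{delta_atmos_C:.0f} C (~{delta_atmos_C * 9 / 5:.0f} F) of warming")
```

The ratio comes out around a thousand, which is why a barely measurable ocean warming translates into a catastrophic-sounding atmospheric figure.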


Here is something Cool and Fun

Pixar and Khan Academy’s Free Online Course for Aspiring Animators

Up there with being an astronaut, comic book artist, or the President, there’s one job that your average kid would probably love to snag: Working at Pixar. Animation and Pixar enthusiasts of all ages, take note! Pixar in A Box (or PIAB) is a collaboration between Khan Academy and Pixar Animation Studios that focuses on real-Pixar-world applications of concepts you might usually encounter in the classroom. The latest batch of Pixar in a Box gives Makers a rare peek under the hood so that you can get a whiff of the warm engine that keeps those Pixar pistons pumping. There’s no need to register for the course, nor a requirement to watch the lessons in order — just head to their site and start exploring!


For Fun
For anyone familiar with the Canadian comedy program - ‘Just For Laughs - Gags’ and similar candid camera approaches to comedy here’s one that involves the emerging world of ‘smart machines’. This made me continuously laugh out loud. - Maybe that says more about me. :)

Bad Robots

Hidden camera show in which technology acts up, much to the bemusement of unsuspecting members of the public


Thursday, October 20, 2016

Friday Thinking 14 Oct. 2016


Content
Quotes

Articles
Can we open the black box of AI?


…. information technology changes the economy in three ways.
First, it dissolves the price mechanism. The economist Paul Romer pointed out in 1990 that information goods — if they can be copied and pasted infinitely, and used simultaneously without wear and tear — must fall in price under market conditions to a value close to zero.
...capitalism responds by inventing mechanisms that put a price on this zero-cost product. Monopolies, patents, WTO actions against countries that allow copyright theft, predatory practices common among big technology vendors.
But it means the essential market relationships are no longer organic. They have to be imposed each morning by lawyers and legislators.

The second impact of information is to automate work faster than new work can be invented.
Instead of high productivity we have low productivity plus what the anthropologist David Graeber calls millions of bullshit jobs — jobs that do not need to exist.
The growth engine is the central bank, pumping money into the system; and the state, propping up effectively insolvent banks. The typical entrepreneur is the migrant labour exploiter; the innovator someone who invents a way of extracting rent from low-wage people — like Uber or AirBnB.

Fortunately there is a third impact of info-tech. It has begun to create organisational and business models where collaboration is more important than price or value.
...as technology allowed it, we started to create organisations where the positive effects of networked collaboration were not captured by the market.
Wikipedia is the obvious example; or Linux; or increasingly the platform co-operatives where people are using networks and apps to fight back against the rent-seeking business models of firms like Uber and Airbnb.
… ask your tech people a more fundamental question: beneath the bonnet of our product, how much of what we use are tools commonly produced, outside the market sector, and maintained for free by a community of technicians? The answer is a lot.

In the 19th century the word for a strike was “taking tools out of the shop”. In the 20th century the management owned the tools. In the 21st century the tools are commonly owned, maintained and free.
The technology itself is in revolt against the monopolised ownership of intellectual property, and the private capture of externalities.

Another way of understanding privatisation is to say: how can we do public services as expensively as possible?

Paul Mason - Postcapitalism and the city

Keynote at Barcelona Initiative for Technological Sovereignty




Many probing and intelligent books have recently helped to make sense of psychological life in the digital age. Some of these analyze the unprecedented levels of surveillance of ordinary citizens, others the unprecedented collective choice of those citizens, especially younger ones, to expose their lives on social media; some explore the moods and emotions performed and observed on social networks, or celebrate the Internet as a vast aesthetic and commercial spectacle, even as a focus of spiritual awe, or decry the sudden expansion and acceleration of bureaucratic control.

The explicit common theme of these books is the newly public world in which practically everyone’s lives are newly accessible and offered for display. The less explicit theme is a newly pervasive, permeable, and transient sense of self, in which much of the experience, feeling, and emotion that used to exist within the confines of the self, in intimate relations, and in tangible unchanging objects—what William James called the “material self”—has migrated to the phone, to the digital “cloud,” and to the shape-shifting judgments of the crowd.

Every technological change that seems to threaten the integrity of the self also offers new ways to strengthen it. Plato warned about the act of writing—as Johannes Trithemius in the fifteenth century warned about printing—that it would shift memory and knowledge from the inward soul to mere outward markings. Yet the words preserved by writing and printing revealed psychological depths that had once seemed inaccessible, created new understandings of moral and intellectual life, and opened new freedoms of personal choice. Two centuries after Gutenberg, Rembrandt painted an old woman reading, her face illuminated by light shining from the Bible in her hands. Substitute a screen for the book, and that symbolic image is now literally accurate. But in the twenty-first century, as in Rembrandt’s seventeenth, the illumination we receive depends on the words we choose to read and the ways we choose to read them.

In the Depths of the Digital Age



One more ‘signal’ about the looming possibilities of a guaranteed livable income.

President Obama: We'll be debating unconditional free money 'over the next 10 or 20 years'

Speaking with Wired editor-in-chief Scott Dadich and MIT Media Lab director Joi Ito in a recent interview, President Barack Obama reaffirmed his belief that universal basic income would be harder to ignore in the coming decades.

UBI is a system of wealth distribution in which the government provides everyone with some money, regardless of income.

The money comes with no strings attached. People can use it however they choose, whether to repair a leaky roof or to go on vacation. Advocates say the system is a smart and straightforward way to lift people out of poverty.

A growing body of evidence suggests such a system might be necessary if artificial intelligence wipes out a huge chunk of jobs performed by humans. That is the future Obama wants to avoid, but he said the possibility warrants a debate on basic income.


The automation mantra - whatever can be, will be - is more than the simple automation of the past. What we are facing in the emerging digital environment is what Kevin Kelly calls ‘Cognification’. To understand just how profoundly disruptive cognification is, apply the following equation: take X (any profession, job, activity) and add Artificial Intelligence driven by machine learning.
Take any X + AI = cognification.
The challenge for humans is the radical shift in how we learn, why we learn and how fast we learn - because with AI, once one instance of AI learns, all instances learn. For example, with self-driving cars, when one car has a near-miss and learns something new, all self-driving cars have learned it.
This takes a great deal longer for humans - despite our mastery of the technologies of language and culture for the evolution and adoption of memes.
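To make that fleet-learning effect concrete, here is a minimal, purely illustrative Python sketch (the class and method names are my own invention, not any vendor’s API): every agent reads from and writes to one shared model, so a lesson learned by one immediately belongs to all.

```python
class SharedModel:
    """A single model shared by every agent in the fleet."""
    def __init__(self):
        self.known_hazards = set()

    def learn(self, hazard):
        self.known_hazards.add(hazard)

class Car:
    def __init__(self, model):
        self.model = model  # every car points at the same shared model

    def near_miss(self, hazard):
        # one car's experience updates the shared model...
        self.model.learn(hazard)

    def recognises(self, hazard):
        return hazard in self.model.known_hazards

fleet_model = SharedModel()
fleet = [Car(fleet_model) for _ in range(3)]

fleet[0].near_miss("cyclist in blind spot")

# ...and every other car in the fleet now recognises it too.
print(all(car.recognises("cyclist in blind spot") for car in fleet))  # True
```

A human, by contrast, has to be taught the lesson individually - which is exactly the asymmetry the cognification argument turns on.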
Big law firms are pouring money into AI as a way of automating tasks traditionally undertaken by junior lawyers. Many believe AI will allow lawyers to focus on complex, higher-value work. An example is Pinsent Masons, whose TermFrame system emulates the decision-making process of a human. It was developed by Orlando Conetta, the firm’s head of R&D, who has degrees in law and computer science and did an LLM in legal reasoning and AI. TermFrame guides lawyers through different types of work while connecting them to relevant templates, documents and precedents at the right moments. He says AI will not make lawyers extinct but “is just another category of technology which helps to solve the problem”.

Artificial intelligence disrupting the business of law

Firms are recognising that failure to invest in technology will hinder ability to compete in today’s legal market
Its traditional aversion to risk has meant the legal profession has not been in the vanguard of new technology. But it is seen as ripe for disruption — a view that is based not least on pressure from tech-savvy corporate clients questioning the size of their legal bills and wanting to reduce risk.

As more law firms become familiar with terms such as machine learning and data mining, they are creating tech-focused jobs like “head of research and development” or hiring coders or artificial intelligence (AI) experts.

Change is being driven not only by demand from clients but also by competition from accounting firms, which have begun to offer legal services and to use technology to do routine work. “Lawtech” start-ups, often set up by ex-lawyers and so-called because they use technology to streamline or automate routine aspects of legal work, are a threat too. Lawtech has been compared to fintech, where small, nimble tech companies are trying to disrupt the business models of established banks.
A study by Deloitte has suggested that technology is already leading to job losses in the UK legal sector, and some 114,000 jobs could be automated within 20 years.


This is a very short article - but well worth reading and consideration for anyone involved in education.
“Universities need to recognise that though the prize may today seem tiny next to their core business, things are only going in one direction,” Mr Nelson said. “The sooner they go through the organisational pain of putting digital first in every area, the sooner leadership can be established in a rapidly changing market.”

University digital learning systems ‘verging on embarrassing’

FutureLearn’s Simon Nelson says universities must go through ‘organisational pain’ of prioritising learning technology or suffer the consequences
Universities must embrace digital learning or face losing out to competitors, according to the head of the UK’s massive open online course platform.

FutureLearn chief executive Simon Nelson said that, while the campus-based degree would “always have its place”, there was “no room for complacency”.

In a lecture to the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA), he argued that institutions that did not grasp the potential of online learning would be overtaken by rival providers.

Mr Nelson claimed that Moocs were “finally moving into mainstream activity”, citing FutureLearn research that found that 5 per cent of UK adults had taken part in a short online course and that another 5 per cent planned to do so in the next 12 months.


The transformation of the education system is partly driven by the transformation of the ‘labor force’ and the accelerating change of work - as Kevin Kelly notes, in a ‘Beta world’ we are ultimately all becoming eternal ‘Newbies’, forever having to forgo a past mastery in order to learn a new domain, technology, knowledge or practice.

'Gig' economy all right for some as up to 30% work this way

The "gig economy" suits Hannah Jones.
"I'm studying, so I can work when I want and for how long I want to. There aren't really any downsides for me at the moment," she says.
Hannah works for Deliveroo, one of the best-known companies in the business thanks to protests by drivers in the summer against proposed changes to the way they are paid.
She is part of a growing army of such workers.


For anyone interested in Virtual Reality (VR) but balking at the cost, Google’s approach may be useful - as long as your smartphone is Daydream-compatible.

R.I.P., Google Cardboard

Google’s new Pixel Phone and Daydream View headset go hand in hand for an accessible VR experience.
Google has added a new virtual reality headset to the mix, but this one has a very different feel from others on the market.

Daydream View, a $79 headset unveiled by the company on Tuesday, has the same basic design as many other mobile VR headsets. The front opens up to allow you to slide in a phone—either Google’s new Pixel phone or any of the incoming Daydream-compatible phones from Android partners. After you strap it to your head, you peer at the phone through two lenses that help create the impression of a 360-degree screen.

The first thing that sets it apart is the fact that it’s made from materials you’re likely to find in clothing—about as far as you can get from the plastic and foam construction of its competitors. The headset also comes with a remote. The palm-sized device has a clickable touch pad and two buttons, but can also be moved, pointed, and swung for motion-based interaction with the headset. Tapping on the side of Samsung’s Gear VR headset has never felt particularly comfortable or intuitive. Google Cardboard really only has one button, limiting what you can do. This one will make interaction much easier for games and scrolling through the headset’s user interface.


This is a wonderful illustration of what is possible in the domain of smartphones and Internet access - if this can happen in India - why not in North America and Europe? Or everywhere for that matter.

Datawind's $45 smartphone will come with free internet subscription in India

After taking India by storm with its ultra low-cost tablets, DataWind has plans to do something similar with smartphones.
DataWind has announced that it will launch three 4G LTE capable phones in India around Diwali festival season next month. The smartphones will be priced between Rs 3,000 ($45) and Rs 5,000 ($75).

The smartphones are just part of the deal as Datawind has also applied for license to become a virtual network operator. This will allow its smartphone users to surf the internet for cheap if they also buy its SIM cards.

Datawind plans to offer a data plan featuring unlimited browsing at Rs 99 ($1.5). "Moreover, we shall offer internet browsing for one year free of cost on our 4G handset," Datawind CEO, Suneet Singh Tuli said.
The cheapest model will carry 1GB of RAM and 8GB of internal storage. The other two variants will be more powerful with one sporting 2GB of RAM and 16GB of internal storage, whereas the other topping that with 3GB of RAM and 32GB of internal storage combination.

The company said it will make nominal profits on these phones - expecting to clock only a 10 percent margin on sales. The phones will be manufactured in the company’s plants in Amritsar and Hyderabad.


Cognification accelerates - the significant question is what will the digital environment enable when cognification has been applied to everything?
“Complex robots with high levels of articulation can perform a task in many ways, so they generate a lot of data and require massive amounts of computing power,” says Masataka Osaki, vice president of worldwide operations at Nvidia.

Japanese Robotics Giant Gives Its Arms Some Brains

Fanuc, a company that produces robot arms for factories, is trying to get them to learn on the job.
The big, dumb, monotonous industrial robots found in many factories could soon be quite a bit smarter, thanks to the introduction of machine-learning skills that are moving out of research labs at a fast pace. Fanuc, one of the world’s largest makers of industrial robots, announced that it will work with Nvidia, a Silicon Valley chipmaker that specializes in artificial intelligence, to add learning capabilities to its products.

The deal is important because it shows how recent advances in AI are poised to overhaul the manufacturing industry. Today’s industrial bots are typically programmed to do a single job very precisely and accurately. But each time a production run changes, the robots then need to be reprogrammed from scratch, which takes time and technical expertise.

Machine learning offers a way to have a robot reprogram itself by learning how to do something through practice. The technique involved, called reinforcement learning, uses a large or deep neural network that controls a robotic arm’s movement and varies its behavior, reinforcing actions that lead it closer to an end goal, like picking up a particular object. And the process can also be sped up by having lots of robots work in concert and then sharing what they have learned. Although robots have become easier to program in recent years, their learning abilities have not advanced very much.
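As a rough illustration of the reinforcement-learning loop described above - a toy sketch, not Fanuc’s or Nvidia’s actual system - here is a tiny Q-learning agent that varies its behavior and reinforces the actions that bring it closer to an end goal, learning to move a ‘gripper’ along a one-dimensional track:

```python
import random

random.seed(0)                # reproducible run
positions = range(5)          # track positions 0..4
goal = 4                      # the target slot
actions = (-1, +1)            # move left or move right
Q = {(s, a): 0.0 for s in positions for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != goal:
        # explore occasionally; otherwise take the best-known action
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), goal)
        reward = 1.0 if s2 == goal else 0.0   # reward only at the goal
        # reinforce actions that led closer to the end goal
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: the best action at every position
policy = {s: max(actions, key=lambda x: Q[(s, x)]) for s in positions if s != goal}
print(policy)
```

On this toy track the learned policy settles on ‘move right’ (+1) at every position. Industrial systems apply the same idea with deep neural networks, continuous motions, and - as the article notes - many robots practising in concert and sharing what they learn.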


A great deal of virtual and real ‘ink’ (if that is really what we use these days) has been spent discussing (reviling, fearing, celebrating, anticipating, considering, etc.) the emergence of ubiquitous Artificial Intelligence - much less ink has been spent discussing ‘artificial morality’ (as if humans could be considered consistently moral). Here is an interesting experiment that enables ‘players’ to put themselves in the place of an AI.

Moral Machine

From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace. The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.

Recent scientific studies on machine ethics have raised awareness about the topic in the media and public discourse. This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.


The speed at which self-driving cars and other autonomous intelligent agents are emerging is incredible. It’s almost impossible to really imagine where this and related technology will be in a decade.
“Today’s first public trials of driverless vehicles in our towns is a ground-breaking moment," Britain's business minister Greg Clark said.
“The global market for autonomous vehicles present huge opportunities for our automotive and technology firms and the research that underpins the technology and software will have applications way beyond autonomous vehicles,” he said.

Driverless vehicle to be tested on UK streets for the first time

A driverless vehicle carrying passengers will take to Britain's public roads for the first time on Tuesday, as part of trials aimed at paving the way for autonomous cars to hit the highways by the end of the decade.

The government is encouraging technology companies, carmakers and start-ups to develop and test their autonomous driving technologies in Britain, aiming to build an industry to serve a worldwide market which it forecasts could be worth around 900 billion pounds by 2025.

Earlier this year, it launched a consultation on changes to insurance rules and motoring regulations to allow driverless cars to be used by 2020 and said it would allow such vehicles to be tested on motorways from next year.

A pod - like a small two-seater car - developed by a company spun out from Oxford University will be tested in the southern English town of Milton Keynes on Tuesday, with organisers hoping the trials will feed vital information on how the vehicle interacts with pedestrians and other road-users.


And at minimum the next decade should see a vast transformation of not just energy geo-politics but of transportation.

Germany’s Bundesrat votes to ban the internal combustion engine by 2030

The resolution is non-binding, but it's still a powerful signal.
Is the tide turning for the internal combustion engine? In Germany, things are starting to look that way. This is the country that invented the technology, but late last week, the Bundesrat (the federal council of all 16 German states) voted to ban gasoline- and diesel-powered vehicles by 2030.

It's a strong statement in a nation where the auto industry is one of the largest sectors of the economy; Germany produces more automobiles than any other country in Europe and is the third largest in the world. The resolution passed by the Bundesrat calls on the European Commission (the executive arm of the European Union) to "evaluate the recent tax and contribution practices of Member States on their effectiveness in promoting zero-emission mobility," which many are taking to mean an end to the lower levels of tax currently levied on diesel fuel across Europe.


The acceleration of bioscience is stunning and, like all great science, continues to generate more questions than answers. This is an interesting discussion of the current state of gene sequencing - well worth reading.
The genome graph is only just starting to crack open its cocoon. Paten and his colleagues hope to release the first open-access genome graph, made up of over 1,000 people, within a year. A company called Seven Bridges rolled out a beta version of a proprietary graph earlier this month.
This map is not a guide to any city on Earth. It is a sketch of the human gene pool.

As DNA reveals its secrets, scientists are assembling a new picture of humanity

Sixteen years ago, two teams of scientists announced they had assembled the first rough draft of the entire human genome. If you wanted, you could read the whole thing — 3.2 billion units, known as base pairs.

Today, hundreds of thousands of people have had their genomes sequenced, and millions more will be completed in the next few years.

But as the numbers skyrocket, it’s becoming painfully clear that the original method that scientists used to compare genomes to each other — and to develop a better understanding of how our DNA influences our lives — is rapidly becoming obsolete. When scientists sequence a new genome, their reconstructions are far from perfect. And those imperfections sometimes cause geneticists to miss a mutation known to cause a disease. They can also make it harder for scientists to discover new links between genes and diseases.

Paten, a computational biologist at the University of California, Santa Cruz, belongs to a cadre of scientists who are building the tools to look at genomes in a new way: as a single network of DNA sequences, known as a genome graph.
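A minimal sketch of what a genome graph looks like in code (hypothetical data, not Paten’s or Seven Bridges’ actual implementation): shared sequence is stored once as nodes, and each variant becomes an alternative path through the graph rather than a whole extra reference.

```python
# Nodes hold sequence fragments; edges say which fragment can follow which.
nodes = {
    1: "ACGT",   # sequence shared by everyone
    2: "A",      # reference allele at a variant site
    3: "G",      # alternative allele carried by some individuals
    4: "TTCA",   # shared sequence after the variant site
}
edges = {1: [2, 3], 2: [4], 3: [4], 4: []}

def all_paths(node):
    """Enumerate every path from `node` to a terminal node."""
    if not edges[node]:
        return [[node]]
    return [[node] + rest for nxt in edges[node] for rest in all_paths(nxt)]

def spell(path):
    """Read off the DNA sequence along one path."""
    return "".join(nodes[n] for n in path)

# Two individual haplotypes fall out of one compact graph.
haplotypes = [spell(p) for p in all_paths(1)]
print(haplotypes)  # ['ACGTATTCA', 'ACGTGTTCA']
```

Aligning a new genome against a graph like this lets a real variant match an existing alternative path instead of being scored as an error against a single linear reference - which is the imperfection the article describes.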


This next article takes the idea of reuse to a whole new level - not just re-using but also reclaiming. This isn’t the only product being worked on.

How To Clean Water With Old Coffee Grounds

Italian researchers have figured out how to turn spent coffee grounds into a foam that can remove heavy metals from water
...a team of Italy-based researchers that has come up with an innovative way of reusing these spent coffee grounds. The team, at the Istituto Italiano di Tecnologia (IIT) in Genoa, is using coffee grounds to clean water, turning the grounds into a foam that can remove heavy metals like mercury.

“We actually take a waste and give it a second life,” says materials scientist Despina Fragouli, who authored a new study about the coffee discovery in the journal ACS Sustainable Chemistry and Engineering.

Fragouli’s team took spent coffee grounds from IIT’s cafeteria, dried and ground them to make the particles smaller. They then mixed the grounds with some silicon and sugar. Once hardened, they dipped it in water to melt away the sugar, which leaves behind a foam-like material.

This foam, which looks a bit like a chocolate sponge cake, is then placed in heavy metal-contaminated water and left to sit. Over a period of 30 hours, the coffee sponge sucks up almost all of the metals, thanks to special metal-attracting qualities of coffee itself. The sponge can then be washed and reused without losing functionality. The amount of silicon in the sponge is low enough that the entire product is biodegradable.


Domesticating biology isn’t limited to crafting DNA - this is a fascinating article about how to make natural entities create even better results.

Silkworms Spin Super-Silk After Eating Carbon Nanotubes and Graphene

The strong, conductive material could be used for wearable electronics and medical implants, researchers say
Silk—the stuff of lustrous, glamorous clothing—is very strong. Researchers now report a clever way to make the gossamer threads even stronger and tougher: by feeding silkworms graphene or single-walled carbon nanotubes (Nano Lett. 2016, DOI: 10.1021/acs.nanolett.6b03597). The reinforced silk produced by the silkworms could be used in applications such as durable protective fabrics, biodegradable medical implants, and eco-friendly wearable electronics, they say.

Researchers have previously added dyes, antimicrobial agents, conductive polymers, and nanoparticles to silk—either by treating spun silk with the additives or, in some cases, by directly feeding the additives to silkworms. Silkworms, the larvae of mulberry-eating silk moths, spin their threads from a solution of silk protein produced in their salivary glands.

In contrast to regular silk, the carbon-enhanced silks are twice as tough and can withstand at least 50% higher stress before breaking. The team heated the silk fibers at 1,050 °C to carbonize the silk protein and then studied their conductivity and structure. The modified silks conduct electricity, unlike regular silk. Raman spectroscopy and electron microscopy imaging showed that the carbon-enhanced silk fibers had a more ordered crystal structure due to the incorporated nanomaterials.


This article is a 12-minute read (it can be downloaded as a PDF) and a contribution to understanding both the ‘Blockchain’ of Bitcoin vintage and the emerging distributed ledger technology of Ethereum, with its lineage of smart contracts and Distributed Autonomous Organizations. All of these efforts remain in the early days of maturation. The concepts promise profound disruption to incumbents and established attractors of efficiency. However, there is much work to do before they are robust and ubiquitous. This is well worth the read for anyone interested in the ongoing development of this technology - which I think is inevitable in some form or another.
While Bitcoin is primarily a crypto-currency system, Ethereum can be viewed in broader terms as a crypto-legal system [8]. To be fair, there are some simplistic smart contracts that can be implemented with the Bitcoin network, and as a digital cash protocol, bitcoin transactions can be seen as simple smart contracts. However, Bitcoin was specifically designed to be a payment system, not a general purpose smart contract platform like Ethereum.

Ethereum For Everyone

Everyone should know about Ethereum — and this article will explain why. The goal here is to describe this groundbreaking software project in layman’s terms. A big-picture approach will be taken, analyzing Ethereum’s design and implementation with comparisons to legacy computer systems. Liberty will be taken to simplify some of the technical details, so emphasis can be placed on the fundamental architecture and socio-economic implications of this radical innovation. Ethereum is an ambitious open source endeavor that promises to change the world by revolutionizing the utility of the Internet [1]. There are far-reaching implications for engineering a better, more honest world. Ethereum is creating a “truth” protocol with unlimited flexibility, allowing anyone and everyone to interact peer-to-peer using the Internet as a trustworthy backbone. Although both will be covered in this post, I think the long-term potential opportunities outweigh the many immediate challenges facing this promising innovation platform.

Conclusion
If Ethereum succeeds, the potential to disrupt the status quo is vast. And if it doesn’t, another attempt to create a fully programmable blockchain will soon follow. The fields of law, regulation, finance, banking, governance, and many more, will be transformed beyond recognition. The full implications may take decades to realize, but there is no doubt a new wave is forming that will reshape human institutions for the better. In all walks of life, third-party middlemen of sometimes questionable integrity will be removed from the equation. Trust will no longer be a scarce commodity, and the Internet will form an honest bridge between anyone and everyone. P2P interaction via the Internet will be reliable and multi-faceted in complexity. Ethereum, Bitcoin, or other future iterations of these distributed consensus networks, will grant true net neutrality and promote cooperative endeavors while mitigating the malicious intent of bad actors.
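To give a flavour of what a ‘smart contract’ is, here is a toy escrow written in plain Python (illustrative only - real Ethereum contracts run on the Ethereum Virtual Machine, typically written in a language such as Solidity): the program itself, rather than a trusted middleman, enforces who may move the funds and when.

```python
class Escrow:
    """A contract whose rules are enforced by code, not by a middleman."""
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.released = False

    def deposit(self, sender, value):
        # only the buyer can fund the escrow, and only with the agreed amount
        if sender != self.buyer or value != self.amount:
            raise ValueError("deposit rejected")
        self.funded = True

    def confirm_delivery(self, sender):
        # funds are released only by the buyer, and only once funded
        if sender != self.buyer or not self.funded:
            raise ValueError("release rejected")
        self.released = True
        return (self.seller, self.amount)  # the payout

contract = Escrow("alice", "bob", 10)
contract.deposit("alice", 10)
print(contract.confirm_delivery("alice"))  # ('bob', 10)
```

On Ethereum the same logic would live on the blockchain, where no single party can alter or bypass the rules once the contract is deployed - that is what makes the network a ‘trustworthy backbone’ in the article’s terms.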


There is too much to know - no one can read everything important, even in narrow domains of knowledge, unless a domain is completely new, and even then the edge moves so fast that keeping up is hard. This article briefly reviews six books focused on the impact of digital technology.

In the Depths of the Digital Age

Every technological revolution coincides with changes in what it means to be a human being, in the kinds of psychological borders that divide the inner life from the world outside. Those changes in sensibility and consciousness never correspond exactly with changes in technology, and many aspects of today’s digital world were already taking shape before the age of the personal computer and the smartphone. But the digital revolution suddenly increased the rate and scale of change in almost everyone’s lives. Elizabeth Eisenstein’s exhilaratingly ambitious historical study The Printing Press as an Agent of Change (1979) may overstate its argument that the press was the initiating cause of the great changes in culture in the early sixteenth century, but her book pointed to the many ways in which new means of communication can amplify slow, preexisting changes into an overwhelming, transforming wave.

In The Changing Nature of Man (1956), the Dutch psychiatrist J.H. van den Berg described four centuries of Western life, from Montaigne to Freud, as a long inward journey. The inner meanings of thought and actions became increasingly significant, while many outward acts became understood as symptoms of inner neuroses rooted in everyone’s distant childhood past; a cigar was no longer merely a cigar. A half-century later, at the start of the digital era in the late twentieth century, these changes reversed direction, and life became increasingly public, open, external, immediate, and exposed.
The reviewed books are:
Pressed for Time: The Acceleration of Life in Digital Capitalism
by Judy Wajcman
University of Chicago Press, 215 pp., $24.00

Exposed: Desire and Disobedience in the Digital Age
by Bernard E. Harcourt
Harvard University Press, 364 pp., $35.00

Magic and Loss: The Internet as Art
by Virginia Heffernan
Simon and Schuster, 263 pp., $26.00

Updating to Remain the Same: Habitual New Media
by Wendy Hui Kyong Chun
MIT Press, 264 pp., $32.00

Mood and Mobility: Navigating the Emotional Spaces of Digital Social Networks
by Richard Coyne
MIT Press, 378 pp., $35.00

Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up
by Philip N. Howard
Yale University Press, 320 pp., $28.00


This is a very long piece - 120 pages (it can be downloaded as a PDF) - that is of interest to anyone looking for a sound methodology and approach for the critique of technology. If I have a criticism of this work, it would be the inexcusable omission of some fundamental thinkers, such as Kevin Kelly, Ray Kurzweil, and Howard Rheingold, and of course some essential science fiction writers and scientists such as Bruce Sterling, David Brin, Vernor Vinge and others.
“This is a work of criticism. If it were literary criticism, everyone would immediately understand the underlying purpose is positive. A critic of literature examines a work, analyzing its features, evaluating its qualities, seeking a deeper appreciation that might be useful to other readers of the same text. In a similar way, critics of music, theater, and the arts have a valuable, well-established role, serving as a helpful bridge between artists and audiences. Criticism of technology, however, is not yet afforded the same glad welcome. Writers who venture beyond the most pedestrian, dreary conceptions of tools and uses to investigate ways in which technical forms are implicated in the basic patterns and problems of our culture are often greeted with the charge that they are merely ‘anti-technology’ or ‘blaming technology.’ All who have recently stepped forward as critics in this realm have been tarred with the same idiot brush, an expression of the desire to stop a much needed dialogue rather than enlarge it. If any readers want to see the present work as ‘anti-technology,’ make the most of it. That is their topic, not mine.”
—Langdon Winner

Toward a Constructive Technology Criticism

Executive Summary
In this report, I draw on interviews with journalists and critics, as well as a broad reading of published work, to assess the current state of technology coverage and criticism in the popular discourse, and to offer some thoughts on how to move the critical enterprise forward. I find that what it means to cover technology is a moving target. Today, the technology beat focuses less on the technology itself and more on how technology intersects with and transforms everything readers care about—from politics to personal relationships. But as technology coverage matures, the distinctions between reporting and criticism are blurring. Even the most straightforward reporting plays a role in guiding public attention and setting agendas.

I further find that technology criticism is too narrowly defined. First, criticism carries negative connotations—that of criticizing with unfavorable opinions rather than critiquing to offer context and interpretation. Strongly associated with notions of progress, technology criticism today skews negative and nihilistic. Second, much of the criticism coming from people widely recognized as “critics” perpetuates these negative associations by employing problematic styles and tactics, and by exercising unreflexive assumptions and ideologies. As a result, many journalists and bloggers are reluctant to associate their work with criticism or identify themselves as critics. And yet I find a larger circle of journalists, bloggers, academics, and critics contributing to the public discourse about technology and addressing important questions by applying a variety of critical lenses to their work. Some of the most novel critiques about technology and Silicon Valley are coming from women and underrepresented minorities, but their work is seldom recognized in traditional critical venues. As a result, readers may miss much of the critical discourse about technology if they focus only on the work of a few, outspoken intellectuals.

Even if a wider set of contributions to the technology discourse is acknowledged, I find that technology criticism still lacks a clearly articulated, constructive agenda. Besides deconstructing, naming, and interpreting technological phenomena, criticism has the potential to assemble new insights and interpretations. In response to this finding, I lay out the elements of a constructive technology criticism that aims to bring stakeholders together in productive conversation rather than pitting them against each other. Constructive criticism poses alternative possibilities. It skews toward optimism, or at least toward an idea that future technological societies could be improved. Acknowledging the realities of society and culture, constructive criticism offers readers the tools and framings for thinking about their relationship to technology and their relationship to power. Beyond intellectual arguments, constructive criticism is embodied, practical, and accessible, and it offers frameworks for living with technology.


And another interesting piece on the ‘black box’ of AI - although one also has to grasp that no one understands consciousness, despite using it every day (or not, as some would claim of others). :)
...meeting an intelligent alien species whose eyes have receptors not just for the primary colours red, green and blue, but also for a fourth colour. It would be very difficult for humans to understand how the alien sees the world, and for the alien to explain it to us, he says. Computers will have similar difficulties explaining things to us, he says. “At some point, it's like explaining Shakespeare to a dog.”

Can we open the black box of AI?

Artificial intelligence is everywhere. But before scientists trust it, they first need to understand how machines learn.
... deciphering the black box has become exponentially harder and more urgent. The technology itself has exploded in complexity and application. Pomerleau, who now teaches robotics part-time at Carnegie Mellon, describes his little van-mounted system as “a poor man's version” of the huge neural networks being implemented on today's machines. And the technique of deep learning, in which the networks are trained on vast archives of big data, is finding commercial applications that range from self-driving cars to websites that recommend products on the basis of a user's browsing history.

It promises to become ubiquitous in science, too. Future radio-astronomy observatories will need deep learning to find worthwhile signals in their otherwise unmanageable amounts of data; gravitational-wave detectors will use it to understand and eliminate the tiniest sources of noise; and publishers will use it to scour and tag millions of research papers and books. Eventually, some researchers believe, computers equipped with deep learning may even display imagination and creativity. “You would just throw data at this machine, and it would come back with the laws of nature,” says Jean-Roch Vlimant, a physicist at the California Institute of Technology in Pasadena.

But such advances would make the black-box problem all the more acute. Exactly how is the machine finding those worthwhile signals, for example? And how can anyone be sure that it's right? How far should people be willing to trust deep learning? “I think we are definitely losing ground to these algorithms,” says roboticist Hod Lipson at Columbia University in New York City.
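The black-box problem is easy to demonstrate even at miniature scale: a trained model's "knowledge" lives entirely in numeric weights that do not explain themselves. The sketch below is illustrative only (a single hand-rolled perceptron learning logical AND, nothing like the deep networks the article discusses, which hold millions of such numbers):

```python
import random

random.seed(0)

def train_perceptron(data, epochs=100, lr=0.1):
    """Train a single perceptron on (x1, x2, target) triples;
    return its learned weights [w1, w2, bias]."""
    w = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(epochs):
        for x1, x2, target in data:
            out = 1 if (w[0] * x1 + w[1] * x2 + w[2]) > 0 else 0
            err = target - out
            # Nudge each weight toward reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

# Logical AND as a toy, linearly separable dataset.
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + w[2]) > 0 else 0

# The model classifies every case correctly...
print([predict(x1, x2) for x1, x2, _ in data])

# ...but what it "learned" is just three unexplained numbers.
print(w)
```

Even here, inspecting the three weights tells you little about the rule they encode; scale that opacity up to millions of parameters and you have the interpretability problem the researchers quoted above are wrestling with.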