Thursday, October 20, 2016

Friday Thinking 14 Oct. 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase transition: tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


Can we open the black box of AI?

…. information technology changes the economy in three ways.
First, it dissolves the price mechanism. The economist Paul Romer pointed out in 1990 that information goods — if they can be copied and pasted infinitely, and used simultaneously without wear and tear — must fall in price under market conditions to a value close to zero.
...capitalism responds by inventing mechanisms that put a price on this zero-cost product. Monopolies, patents, WTO actions against countries that allow copyright theft, predatory practices common among big technology vendors.
But it means the essential market relationships are no longer organic. They have to be imposed each morning by lawyers and legislators.

The second impact of information is to automate work faster than new work can be invented.
Instead of high productivity we have low productivity plus what the anthropologist David Graeber calls millions of bullshit jobs — jobs that do not need to exist.
The growth engine is the central bank, pumping money into the system; and the state, propping up effectively insolvent banks. The typical entrepreneur is the migrant labour exploiter; the innovator someone who invents a way of extracting rent from low-wage people — like Uber or AirBnB.

Fortunately there is a third impact of info-tech. It has begun to create organisational and business models where collaboration is more important than price or value. Once technology allowed it, we started to create organisations where the positive effects of networked collaboration were not captured by the market.
Wikipedia is the obvious example; or Linux; or increasingly the platform co-operatives where people are using networks and apps to fight back against the rent-seeking business models of firms like Uber and Airbnb.
… ask your tech people a more fundamental question: beneath the bonnet of our product, how much of what we use are tools commonly produced, outside the market sector, and maintained for free by a community of technicians? The answer is a lot.

In the 19th century the word for a strike was “taking tools out of the shop”. In the 20th century the management owned the tools. In the 21st century the tools are commonly owned, maintained and free.
The technology itself is in revolt against the monopolised ownership of intellectual property, and the private capture of externalities.

Another way of understanding privatisation is to say: how can we do public services as expensively as possible?

Paul Mason - Postcapitalism and the city

Keynote at Barcelona Initiative for Technological Sovereignty

Many probing and intelligent books have recently helped to make sense of psychological life in the digital age. Some of these analyze the unprecedented levels of surveillance of ordinary citizens, others the unprecedented collective choice of those citizens, especially younger ones, to expose their lives on social media; some explore the moods and emotions performed and observed on social networks, or celebrate the Internet as a vast aesthetic and commercial spectacle, even as a focus of spiritual awe, or decry the sudden expansion and acceleration of bureaucratic control.

The explicit common theme of these books is the newly public world in which practically everyone’s lives are newly accessible and offered for display. The less explicit theme is a newly pervasive, permeable, and transient sense of self, in which much of the experience, feeling, and emotion that used to exist within the confines of the self, in intimate relations, and in tangible unchanging objects—what William James called the “material self”—has migrated to the phone, to the digital “cloud,” and to the shape-shifting judgments of the crowd.

Every technological change that seems to threaten the integrity of the self also offers new ways to strengthen it. Plato warned about the act of writing—as Johannes Trithemius in the fifteenth century warned about printing—that it would shift memory and knowledge from the inward soul to mere outward markings. Yet the words preserved by writing and printing revealed psychological depths that had once seemed inaccessible, created new understandings of moral and intellectual life, and opened new freedoms of personal choice. Two centuries after Gutenberg, Rembrandt painted an old woman reading, her face illuminated by light shining from the Bible in her hands. Substitute a screen for the book, and that symbolic image is now literally accurate. But in the twenty-first century, as in Rembrandt’s seventeenth, the illumination we receive depends on the words we choose to read and the ways we choose to read them.

In the Depths of the Digital Age

One more ‘signal’ about the looming possibilities of a guaranteed livable income.

President Obama: We'll be debating unconditional free money 'over the next 10 or 20 years'

Speaking with Wired editor-in-chief Scott Dadich and MIT Media Lab director Joi Ito in a recent interview, President Barack Obama reaffirmed his belief that universal basic income would be harder to ignore in the coming decades.

UBI is a system of wealth distribution in which the government provides everyone with some money, regardless of income.

The money comes with no strings attached. People can use it however they choose, whether to repair a leaky roof or to go on vacation. Advocates say the system is a smart and straightforward way to lift people out of poverty.

A growing body of evidence suggests such a system might be necessary if artificial intelligence wipes out a huge chunk of jobs performed by humans. That is the future Obama wants to avoid, but he said the possibility warrants a debate on basic income.

The automation mantra - whatever can be automated will be - is more than the simple automation of the past. What we are facing in the emerging digital environment is what Kevin Kelly calls 'cognification' - to understand just how profoundly disruptive cognification is, apply the following equation: take X (any profession, job, activity) and add artificial intelligence driven by machine learning.
Take any X + AI = cognification.
The challenge for humans is the radical shift in how we learn, why we learn and how fast we learn - because with AI, once one instance of AI learns, all instances learn. For example, self-driving cars: one car has a near-miss and learns something new - now all self-driving cars have learned it.
This takes a great deal longer for humans - despite our mastery of the technologies of language and culture for the evolution and adoption of memes.
Big law firms are pouring money into AI as a way of automating tasks traditionally undertaken by junior lawyers. Many believe AI will allow lawyers to focus on complex, higher-value work. An example is Pinsent Masons, whose TermFrame system emulates the decision-making process of a human. It was developed by Orlando Conetta, the firm’s head of R&D, who has degrees in law and computer science and did an LLM in legal reasoning and AI. TermFrame guides lawyers through different types of work while connecting them to relevant templates, documents and precedents at the right moments. He says AI will not make lawyers extinct but “is just another category of technology which helps to solve the problem”.

Artificial intelligence disrupting the business of law

Firms are recognising that failure to invest in technology will hinder ability to compete in today’s legal market
Its traditional aversion to risk has meant the legal profession has not been in the vanguard of new technology. But it is seen as ripe for disruption — a view that is based not least on pressure from tech-savvy corporate clients questioning the size of their legal bills and wanting to reduce risk.

As more law firms become familiar with terms such as machine learning and data mining, they are creating tech-focused jobs like “head of research and development” or hiring coders or artificial intelligence (AI) experts.

Change is being driven not only by demand from clients but also by competition from accounting firms, which have begun to offer legal services and to use technology to do routine work. “Lawtech” start-ups, often set up by ex-lawyers and so-called because they use technology to streamline or automate routine aspects of legal work, are a threat too. Lawtech has been compared to fintech, where small, nimble tech companies are trying to disrupt the business models of established banks.
A study by Deloitte has suggested that technology is already leading to job losses in the UK legal sector, and some 114,000 jobs could be automated within 20 years.

This is a very short article - but well worth reading and considering for anyone involved in education.
“Universities need to recognise that though the prize may today seem tiny next to their core business, things are only going in one direction,” Mr Nelson said. “The sooner they go through the organisational pain of putting digital first in every area, the sooner leadership can be established in a rapidly changing market.”

University digital learning systems ‘verging on embarrassing’

FutureLearn’s Simon Nelson says universities must go through ‘organisational pain’ of prioritising learning technology or suffer the consequences
Universities must embrace digital learning or face losing out to competitors, according to the head of the UK’s massive open online course platform.

FutureLearn chief executive Simon Nelson said that, while the campus-based degree would “always have its place”, there was “no room for complacency”.

In a lecture to the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA), he argued that institutions that did not grasp the potential of online learning would be overtaken by rival providers.

Mr Nelson claimed that Moocs were “finally moving into mainstream activity”, citing FutureLearn research that found that 5 per cent of UK adults had taken part in a short online course and that another 5 per cent planned to do so in the next 12 months.

The transformation of the education system is partly driven by the transformation of the 'labor force' and the accelerating change of work - as Kevin Kelly notes, in a 'beta world' we are all ultimately becoming eternal 'newbies', forever having to forego a past mastery to learn a new domain, technology, knowledge or practice.

'Gig' economy all right for some as up to 30% work this way

The "gig economy" suits Hannah Jones.
"I'm studying, so I can work when I want and for how long I want to. There aren't really any downsides for me at the moment," she says.
Hannah works for Deliveroo, one of the best-known companies in the business thanks to protests by drivers in the summer against proposed changes to the way they are paid.
She is part of a growing army of such workers.

For anyone interested in virtual reality (VR) but balking at the cost, Google's approach may be useful - as long as your smartphone is compatible with the Daydream platform.

R.I.P., Google Cardboard

Google’s new Pixel Phone and Daydream View headset go hand in hand for an accessible VR experience.
Google has added a new virtual reality headset to the mix, but this one has a very different feel from others on the market.

Daydream View, a $79 headset unveiled by the company on Tuesday, has the same basic design as many other mobile VR headsets. The front opens up to allow you to slide in a phone—either Google’s new Pixel phone or any of the incoming Daydream-compatible phones from Android partners. After you strap it to your head, you peer at the phone through two lenses that help create the impression of a 360-degree screen.

The first thing that sets it apart is the fact that it’s made from materials you’re likely to find in clothing—about as far as you can get from the plastic and foam construction of its competitors. The headset also comes with a remote. The palm-sized device has a clickable touch pad and two buttons, but can also be moved, pointed, and swung for motion-based interaction with the headset. Tapping on the side of Samsung’s Gear VR headset has never felt particularly comfortable or intuitive. Google Cardboard really only has one button, limiting what you can do. This one will make interaction much easier for games and scrolling through the headset’s user interface.

This is a wonderful illustration of what is possible in the domain of smartphones and Internet access - if this can happen in India - why not in North America and Europe? Or everywhere for that matter.

Datawind's $45 smartphone will come with free internet subscription in India

After taking India by storm with its ultra low-cost tablets, DataWind has plans to do something similar with smartphones.
DataWind has announced that it will launch three 4G LTE capable phones in India around Diwali festival season next month. The smartphones will be priced between Rs 3,000 ($45) and Rs 5,000 ($75).

The smartphones are just part of the deal, as Datawind has also applied for a license to become a virtual network operator. This will allow its smartphone users to surf the internet cheaply if they also buy its SIM cards.

Datawind plans to offer a data plan featuring unlimited browsing at Rs 99 ($1.5). "Moreover, we shall offer internet browsing for one year free of cost on our 4G handset," Datawind CEO, Suneet Singh Tuli said.
The cheapest model will carry 1GB of RAM and 8GB of internal storage. The other two variants will be more powerful: one sports 2GB of RAM and 16GB of internal storage, while the other tops that with 3GB of RAM and 32GB.

The company said it will make nominal profits on these phones - expecting to clock only a 10 percent margin on sales. The phones will be manufactured in the company's plants in Amritsar and Hyderabad.

Cognification accelerates - the significant question is what will the digital environment enable when cognification has been applied to everything?
“Complex robots with high levels of articulation can perform a task in many ways, so they generate a lot of data and require massive amounts of computing power,” says Masataka Osaki, vice president of worldwide operations at Nvidia.

Japanese Robotics Giant Gives Its Arms Some Brains

Fanuc, a company that produces robot arms for factories, is trying to get them to learn on the job.
The big, dumb, monotonous industrial robots found in many factories could soon be quite a bit smarter, thanks to the introduction of machine-learning skills that are moving out of research labs at a fast pace. Fanuc, one of the world’s largest makers of industrial robots, announced that it will work with Nvidia, a Silicon Valley chipmaker that specializes in artificial intelligence, to add learning capabilities to its products.

The deal is important because it shows how recent advances in AI are poised to overhaul the manufacturing industry. Today’s industrial bots are typically programmed to do a single job very precisely and accurately. But each time a production run changes, the robots then need to be reprogrammed from scratch, which takes time and technical expertise.

Machine learning offers a way to have a robot reprogram itself by learning how to do something through practice. The technique involved, called reinforcement learning, uses a large or deep neural network that controls a robotic arm’s movement and varies its behavior, reinforcing actions that lead it closer to an end goal, like picking up a particular object. And the process can also be sped up by having lots of robots work in concert and then sharing what they have learned. Although robots have become easier to program in recent years, their learning abilities have not advanced very much.
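The reinforcement loop described above (vary behavior, reinforce actions that move closer to the goal, pool what is learned) can be sketched in miniature. The toy below is my own illustration, nothing like Fanuc's or Nvidia's actual systems: tabular Q-learning on a one-dimensional "reach the object" task, with invented positions, rewards and constants. Because both "robots" write into one shared Q-table, what one learns, all learn.

```python
import random

ACTIONS = [-1, +1]          # move the gripper left or right
TARGET = 7                  # position of the object to pick up
ALPHA, GAMMA = 0.5, 0.9     # learning rate and discount (arbitrary here)

def train(q, episodes=2000, start=0, seed=0):
    """Practice runs: explore at random, reinforce actions near the goal."""
    rng = random.Random(seed)
    for _ in range(episodes):
        pos = start
        for _ in range(30):                      # step budget per attempt
            a = rng.choice(ACTIONS)              # explore; real systems also exploit
            nxt = max(0, min(10, pos + a))       # arm track runs from 0 to 10
            reward = 1.0 if nxt == TARGET else -0.1
            best_next = max(q.get((nxt, b), 0.0) for b in ACTIONS)
            old = q.get((pos, a), 0.0)
            # reinforce: nudge the value toward reward plus discounted future value
            q[(pos, a)] = old + ALPHA * (reward + GAMMA * best_next - old)
            pos = nxt

def greedy_rollout(q, pos=0, steps=30):
    """After training, follow the best-known action at each position."""
    path = [pos]
    for _ in range(steps):
        a = max(ACTIONS, key=lambda act: q.get((pos, act), 0.0))
        pos = max(0, min(10, pos + a))
        path.append(pos)
        if pos == TARGET:
            break
    return path
```

Because the dictionary `q` is shared, a second robot trained from the other end of the track starts out already knowing everything the first robot practiced - the "lots of robots working in concert" effect the article describes.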

A great deal of virtual and real 'ink' (if that is really what we use these days) has been used to discuss (revile, fear, celebrate, anticipate, consider, etc.) the emergence of ubiquitous artificial intelligence - much less ink has been used to discuss 'artificial morality' (as if humans could be considered consistently moral). Here is an interesting experiment that enables 'players' to put themselves in the place of an AI.

Moral Machine

From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace. The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.

Recent scientific studies on machine ethics have raised awareness about the topic in the media and public discourse. This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.

The speed at which self-driving cars and other autonomous intelligent agents are emerging is incredible. It's almost impossible to really imagine where this and related technology will be in a decade.
“Today’s first public trials of driverless vehicles in our towns is a ground-breaking moment," Britain's business minister Greg Clark said.
“The global market for autonomous vehicles present huge opportunities for our automotive and technology firms and the research that underpins the technology and software will have applications way beyond autonomous vehicles,” he said.

Driverless vehicle to be tested on UK streets for the first time

A driverless vehicle carrying passengers will take to Britain's public roads for the first time on Tuesday, as part of trials aimed at paving the way for autonomous cars to hit the highways by the end of the decade.

The government is encouraging technology companies, carmakers and start-ups to develop and test their autonomous driving technologies in Britain, aiming to build an industry to serve a worldwide market which it forecasts could be worth around 900 billion pounds by 2025.

Earlier this year, it launched a consultation on changes to insurance rules and motoring regulations to allow driverless cars to be used by 2020 and said it would allow such vehicles to be tested on motorways from next year.

A pod - like a small two-seater car - developed by a company spun out from Oxford University will be tested in the southern English town of Milton Keynes on Tuesday, with organisers hoping the trials will feed vital information on how the vehicle interacts with pedestrians and other road-users.

And at minimum the next decade should see a vast transformation of not just energy geo-politics but of transportation.

Germany’s Bundesrat votes to ban the internal combustion engine by 2030

The resolution is non-binding, but it's still a powerful signal.
Is the tide turning for the internal combustion engine? In Germany, things are starting to look that way. This is the country that invented the technology, but late last week, the Bundesrat (the federal council of all 16 German states) voted to ban gasoline- and diesel-powered vehicles by 2030.

It's a strong statement in a nation where the auto industry is one of the largest sectors of the economy; Germany produces more automobiles than any other country in Europe and is the third largest in the world. The resolution passed by the Bundesrat calls on the European Commission (the executive arm of the European Union) to "evaluate the recent tax and contribution practices of Member States on their effectiveness in promoting zero-emission mobility," which many are taking to mean an end to the lower levels of tax currently levied on diesel fuel across Europe.

The acceleration of bioscience is stunning and, like all great science, continues to generate more questions than answers. This is an interesting article on the current state of gene sequencing - a great discussion.
The genome graph is only just starting to crack open its cocoon. Paten and his colleagues hope to release the first open-access genome graph, made up of over 1,000 people, within a year. A company called Seven Bridges rolled out a beta version of a proprietary graph earlier this month.
This map is not a guide to any city on Earth. It is a sketch of the human gene pool.

As DNA reveals its secrets, scientists are assembling a new picture of humanity

Sixteen years ago, two teams of scientists announced they had assembled the first rough draft of the entire human genome. If you wanted, you could read the whole thing — 3.2 billion units, known as base pairs.

Today, hundreds of thousands of people have had their genomes sequenced, and millions more will be completed in the next few years.

But as the numbers skyrocket, it’s becoming painfully clear that the original method that scientists used to compare genomes to each other — and to develop a better understanding of how our DNA influences our lives — is rapidly becoming obsolete. When scientists sequence a new genome, their reconstructions are far from perfect. And those imperfections sometimes cause geneticists to miss a mutation known to cause a disease. They can also make it harder for scientists to discover new links between genes and diseases.

Paten, a computational biologist at the University of California, Santa Cruz, belongs to a cadre of scientists who are building the tools to look at genomes in a new way: as a single network of DNA sequences, known as a genome graph.
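The idea of a genome graph can be pictured in miniature: shared sequence is stored once, variants appear as alternative branches, and each individual genome is simply a path through the graph. The sketch below is a made-up toy (not the data model used by Paten's group or Seven Bridges), with a hypothetical seven-node locus carrying two single-letter variants.

```python
# Hypothetical tiny locus: shared stretches of DNA are single nodes,
# and each variant site branches into its two alleles.
nodes = {
    1: "GATT",   # shared prefix
    2: "A",      # reference allele at site 1
    3: "C",      # alternate allele at site 1
    4: "CAGT",   # shared middle
    5: "G",      # reference allele at site 2
    6: "T",      # alternate allele at site 2
    7: "TTC",    # shared suffix
}
edges = {1: [2, 3], 2: [4], 3: [4], 4: [5, 6], 5: [7], 6: [7]}

def spell(path):
    """Concatenate node sequences along a path to recover one haplotype."""
    return "".join(nodes[n] for n in path)

def all_haplotypes(start=1, end=7):
    """Enumerate every sequence this graph can spell between two nodes."""
    out = []
    def walk(node, seq):
        seq = seq + nodes[node]
        if node == end:
            out.append(seq)
            return
        for nxt in edges.get(node, []):
            walk(nxt, seq)
    walk(start, "")
    return out
```

The payoff the article describes falls out of the structure: a new sample is recorded as a path (a list of node numbers) rather than as a string of mismatches against one arbitrary linear reference, so known variants are never "imperfections" to begin with.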

This next article takes the idea of reuse to a whole new level - not just re-using but also reclaiming. This isn’t the only product being worked on.

How To Clean Water With Old Coffee Grounds

Italian researchers have figured out how to turn spent coffee grounds into a foam that can remove heavy metals from water
...a team of Italy-based researchers that has come up with an innovative way of reusing these spent coffee grounds. The team, at the Istituto Italiano di Tecnologia (IIT) in Genoa, is using coffee grounds to clean water, turning the grounds into a foam that can remove heavy metals like mercury.

“We actually take a waste and give it a second life,” says materials scientist Despina Fragouli, who authored a new study about the coffee discovery in the journal ACS Sustainable Chemistry and Engineering.

Fragouli’s team took spent coffee grounds from IIT’s cafeteria, then dried and ground them to make the particles smaller. They then mixed the grounds with some silicone and sugar. Once the mixture hardened, they dipped it in water to dissolve away the sugar, leaving behind a foam-like material.

This foam, which looks a bit like a chocolate sponge cake, is then placed in heavy metal-contaminated water and left to sit. Over a period of 30 hours, the coffee sponge sucks up almost all of the metals, thanks to the special metal-attracting qualities of coffee itself. The sponge can then be washed and reused without losing functionality. The amount of silicone in the sponge is low enough that the entire product is biodegradable.

Domesticating biology isn’t limited to crafting DNA - this is a fascinating article about how to make natural entities create even better results.

Silkworms Spin Super-Silk After Eating Carbon Nanotubes and Graphene

The strong, conductive material could be used for wearable electronics and medical implants, researchers say
Silk—the stuff of lustrous, glamorous clothing—is very strong. Researchers now report a clever way to make the gossamer threads even stronger and tougher: by feeding silkworms graphene or single-walled carbon nanotubes (Nano Lett. 2016, DOI: 10.1021/acs.nanolett.6b03597). The reinforced silk produced by the silkworms could be used in applications such as durable protective fabrics, biodegradable medical implants, and eco-friendly wearable electronics, they say.

Researchers have previously added dyes, antimicrobial agents, conductive polymers, and nanoparticles to silk—either by treating spun silk with the additives or, in some cases, by directly feeding the additives to silkworms. Silkworms, the larvae of mulberry-eating silk moths, spin their threads from a solution of silk protein produced in their salivary glands.

In contrast to regular silk, the carbon-enhanced silks are twice as tough and can withstand at least 50% higher stress before breaking. The team heated the silk fibers at 1,050 °C to carbonize the silk protein and then studied their conductivity and structure. The modified silks conduct electricity, unlike regular silk. Raman spectroscopy and electron microscopy imaging showed that the carbon-enhanced silk fibers had a more ordered crystal structure due to the incorporated nanomaterials.

This article is a 12 min read and a contribution to understanding both the 'blockchain' of Bitcoin vintage and the emerging distributed ledger technology of Ethereum, of smart contract and Distributed Autonomous Organization lineage. All of these efforts remain in the early days of maturation. The concepts promise profound disruption to incumbents and established attractors of efficiency. However, there is much work to do before they are robust and ubiquitous. This is well worth the read for anyone interested in the ongoing development of this technology - which I think is inevitable in some form or another.
While Bitcoin is primarily a crypto-currency system, Ethereum can be viewed in broader terms as a crypto-legal system [8]. To be fair, there are some simplistic smart contracts that can be implemented with the Bitcoin network, and as a digital cash protocol, bitcoin transactions can be seen as simple smart contracts. However, Bitcoin was specifically designed to be a payment system, not a general purpose smart contract platform like Ethereum.

Ethereum For Everyone

Everyone should know about Ethereum — and this article will explain why. The goal here is to describe this groundbreaking software project in layman’s terms. A big-picture approach will be taken, analyzing Ethereum’s design and implementation with comparisons to legacy computer systems. Liberty will be taken to simplify some of the technical details, so emphasis can be placed on the fundamental architecture and socio-economic implications of this radical innovation. Ethereum is an ambitious open source endeavor that promises to change the world by revolutionizing the utility of the Internet [1]. There are far-reaching implications for engineering a better, more honest world. Ethereum is creating a “truth” protocol with unlimited flexibility, allowing anyone and everyone to interact peer-to-peer using the Internet as a trustworthy backbone. Although both will be covered in this post, I think the long-term potential opportunities outweigh the many immediate challenges facing this promising innovation platform.

If Ethereum succeeds, the potential to disrupt the status quo is vast. And if it doesn’t, another attempt to create a fully programmable blockchain will soon follow. The fields of law, regulation, finance, banking, governance, and many more, will be transformed beyond recognition. The full implications may take decades to realize, but there is no doubt a new wave is forming that will reshape human institutions for the better. In all walks of life, third-party middlemen of sometimes questionable integrity will be removed from the equation. Trust will no longer be a scarce commodity, and the Internet will form an honest bridge between anyone and everyone. P2P interaction via the Internet will be reliable and multi-faceted in complexity. Ethereum, Bitcoin, or other future iterations of these distributed consensus networks, will grant true net neutrality and promote cooperative endeavors while mitigating the malicious intent of bad actors.
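One way to picture the distinction the article draws between a payment ledger and a "crypto-legal system" is that a smart contract is simply deterministic code plus state that every node in the network executes identically, so all nodes agree on the outcome without a trusted middleman. The toy escrow below is my own illustration; it bears no resemblance to the real Ethereum Virtual Machine or its contract languages, and the parties and rules are invented for the example.

```python
# Toy "smart contract": deterministic rules enforced by code, not by an agent.
# The buyer deposits funds; they are released to the seller only on confirmation.
class Escrow:
    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.balance = 0
        self.released = False

    def deposit(self, sender, amount):
        # only the named buyer may fund the contract, and only with the exact price
        if sender != self.buyer or amount != self.price:
            raise ValueError("only the buyer may deposit the exact price")
        self.balance += amount

    def confirm(self, sender):
        # funds move only if the buyer confirms after a valid deposit
        if sender != self.buyer or self.balance < self.price:
            raise ValueError("buyer must confirm after depositing")
        self.released = True
        payout, self.balance = self.balance, 0
        return (self.seller, payout)   # funds go to the seller
```

Every node replays the same sequence of calls and arrives at the same state; consensus over that replay, rather than an escrow agent, is what guarantees the outcome.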

There is too much to know - no one can read everything important even in narrow domains of knowledge - unless it is completely new, and even then the edge is moving so fast that keeping up is hard. This article briefly reviews six books that focus on the impact of digital technology.

In the Depths of the Digital Age

Every technological revolution coincides with changes in what it means to be a human being, in the kinds of psychological borders that divide the inner life from the world outside. Those changes in sensibility and consciousness never correspond exactly with changes in technology, and many aspects of today’s digital world were already taking shape before the age of the personal computer and the smartphone. But the digital revolution suddenly increased the rate and scale of change in almost everyone’s lives. Elizabeth Eisenstein’s exhilaratingly ambitious historical study The Printing Press as an Agent of Change (1979) may overstate its argument that the press was the initiating cause of the great changes in culture in the early sixteenth century, but her book pointed to the many ways in which new means of communication can amplify slow, preexisting changes into an overwhelming, transforming wave.

In The Changing Nature of Man (1956), the Dutch psychiatrist J.H. van den Berg described four centuries of Western life, from Montaigne to Freud, as a long inward journey. The inner meanings of thought and actions became increasingly significant, while many outward acts became understood as symptoms of inner neuroses rooted in everyone’s distant childhood past; a cigar was no longer merely a cigar. A half-century later, at the start of the digital era in the late twentieth century, these changes reversed direction, and life became increasingly public, open, external, immediate, and exposed.
The reviewed books are:
Pressed for Time: The Acceleration of Life in Digital Capitalism
by Judy Wajcman
University of Chicago Press, 215 pp., $24.00

Exposed: Desire and Disobedience in the Digital Age
by Bernard E. Harcourt
Harvard University Press, 364 pp., $35.00

Magic and Loss: The Internet as Art
by Virginia Heffernan
Simon and Schuster, 263 pp., $26.00

Updating to Remain the Same: Habitual New Media
by Wendy Hui Kyong Chun
MIT Press, 264 pp., $32.00

Mood and Mobility: Navigating the Emotional Spaces of Digital Social Networks
by Richard Coyne
MIT Press, 378 pp., $35.00

Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up
by Philip N. Howard
Yale University Press, 320 pp., $28.00

This is a very long piece, 120 pages (it can be downloaded as a pdf), that is of interest to anyone looking for a sound methodology and approach for the critique of technology. If I have a criticism of this work, it would be the inexcusable omission of some fundamental thinkers, such as Kevin Kelly, Ray Kurzweil, Howard Rheingold, etc., and of course some essential science fiction writers and scientists such as Bruce Sterling, David Brin, Vernor Vinge and others.
“This is a work of criticism. If it were literary criticism, everyone would immediately understand the underlying purpose is positive. A critic of literature examines a work, analyzing its features, evaluating its qualities, seeking a deeper appreciation that might be useful to other readers of the same text. In a similar way, critics of music, theater, and the arts have a valuable, well-established role, serving as a helpful bridge between artists and audiences. Criticism of technology, however, is not yet afforded the same glad welcome. Writers who venture beyond the most pedestrian, dreary conceptions of tools and uses to investigate ways in which technical forms are implicated in the basic patterns and problems of our culture are often greeted with the charge that they are merely ‘anti-technology’ or ‘blaming technology.’ All who have recently stepped forward as critics in this realm have been tarred with the same idiot brush, an expression of the desire to stop a much needed dialogue rather than enlarge it. If any readers want to see the present work as ‘anti-technology,’ make the most of it. That is their topic, not mine.”
—Langdon Winner

Toward a Constructive Technology Criticism

Executive Summary
In this report, I draw on interviews with journalists and critics, as well as a broad reading of published work, to assess the current state of technology coverage and criticism in the popular discourse, and to offer some thoughts on how to move the critical enterprise forward. I find that what it means to cover technology is a moving target. Today, the technology beat focuses less on the technology itself and more on how technology intersects with and transforms everything readers care about—from politics to personal relationships. But as technology coverage matures, the distinctions between reporting and criticism are blurring. Even the most straightforward reporting plays a role in guiding public attention and setting agendas.

I further find that technology criticism is too narrowly defined. First, criticism carries negative connotations—that of criticizing with unfavorable opinions rather than critiquing to offer context and interpretation. Strongly associated with notions of progress, technology criticism today skews negative and nihilistic. Second, much of the criticism coming from people widely recognized as “critics” perpetuates these negative associations by employing problematic styles and tactics, and by exercising unreflexive assumptions and ideologies. As a result, many journalists and bloggers are reluctant to associate their work with criticism or identify themselves as critics. And yet I find a larger circle of journalists, bloggers, academics, and critics contributing to the public discourse about technology and addressing important questions by applying a variety of critical lenses to their work. Some of the most novel critiques about technology and Silicon Valley are coming from women and underrepresented minorities, but their work is seldom recognized in traditional critical venues. As a result, readers may miss much of the critical discourse about technology if they focus only on the work of a few, outspoken intellectuals.

Even if a wider set of contributions to the technology discourse is acknowledged, I find that technology criticism still lacks a clearly articulated, constructive agenda. Besides deconstructing, naming, and interpreting technological phenomena, criticism has the potential to assemble new insights and interpretations. In response to this finding, I lay out the elements of a constructive technology criticism that aims to bring stakeholders together in productive conversation rather than pitting them against each other. Constructive criticism poses alternative possibilities. It skews toward optimism, or at least toward an idea that future technological societies could be improved. Acknowledging the realities of society and culture, constructive criticism offers readers the tools and framings for thinking about their relationship to technology and their relationship to power. Beyond intellectual arguments, constructive criticism is embodied, practical, and accessible, and it offers frameworks for living with technology.

And another interesting piece on the ‘black box’ of AI - although one also has to grasp that no one understands consciousness, despite using it every day (or not, as some would claim of some others). :)
...meeting an intelligent alien species whose eyes have receptors not just for the primary colours red, green and blue, but also for a fourth colour. It would be very difficult for humans to understand how the alien sees the world, and for the alien to explain it to us, he says. Computers will have similar difficulties explaining things to us, he says. “At some point, it's like explaining Shakespeare to a dog.”

Can we open the black box of AI?

Artificial intelligence is everywhere. But before scientists trust it, they first need to understand how machines learn.
... deciphering the black box has become exponentially harder and more urgent. The technology itself has exploded in complexity and application. Pomerleau, who now teaches robotics part-time at Carnegie Mellon, describes his little van-mounted system as “a poor man's version” of the huge neural networks being implemented on today's machines. And the technique of deep learning, in which the networks are trained on vast archives of big data, is finding commercial applications that range from self-driving cars to websites that recommend products on the basis of a user's browsing history.
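The black-box problem described here can be felt even at toy scale. Below is a minimal, hypothetical sketch (not from the article, and nothing like the "huge neural networks" it describes): a two-layer network trained on XOR with plain NumPy. It learns the task, yet printing its weight matrices yields numbers that carry no human-readable explanation of *what* was learned - which is the interpretability problem in miniature.

```python
import numpy as np

# Toy illustration of the "black box": a tiny 2-layer network learns XOR,
# but its learned weights are opaque to direct inspection.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer params
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer params

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)            # forward pass: prediction
    losses.append(float(np.mean((out - y) ** 2)))
    # backpropagation of mean-squared-error gradient
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print("loss fell from", round(losses[0], 3), "to", round(losses[-1], 3))
print("learned hidden weights (opaque to inspection):")
print(W1)
```

Even here, explaining *why* those particular weight values solve XOR requires reverse-engineering the network after the fact; scaling the same opacity up to millions of parameters is what makes the systems in the article so hard to audit.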

It promises to become ubiquitous in science, too. Future radio-astronomy observatories will need deep learning to find worthwhile signals in their otherwise unmanageable amounts of data; gravitational-wave detectors will use it to understand and eliminate the tiniest sources of noise; and publishers will use it to scour and tag millions of research papers and books. Eventually, some researchers believe, computers equipped with deep learning may even display imagination and creativity. “You would just throw data at this machine, and it would come back with the laws of nature,” says Jean-Roch Vlimant, a physicist at the California Institute of Technology in Pasadena.

But such advances would make the black-box problem all the more acute. Exactly how is the machine finding those worthwhile signals, for example? And how can anyone be sure that it's right? How far should people be willing to trust deep learning? “I think we are definitely losing ground to these algorithms,” says roboticist Hod Lipson at Columbia University in New York City.
