Thursday, May 9, 2019

Friday Thinking 10 May 2019

Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning.
Work that engages our whole self becomes play that works.
Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  
In the 21st century - the planet is the little school house in the galaxy.
Citizenship is the battlefield of the 21st  Century

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes:

Articles:



It is only juvenile intelligence that analyzes things and arrives at conclusions. If your intelligence is sufficiently evolved and mature, you realize that the more you analyze, the further away you are from any conclusion.

Sadhguru - "Inner Engineering: A Yogi’s Guide to Joy”




The past few decades have increasingly highlighted that much of the activity in sensory networks is intrinsically generated, rather than driven by external stimuli. Compare the activity in the visual cortex of an animal in complete darkness with that of an animal looking around, and it’s difficult to tell the two apart. Even in the absence of light, sets of neurons in the cortex begin to fire together, either at the same time or in predictable waves. This correlated firing persists as a so-called metastable state for anywhere from a few hundred milliseconds to a few seconds, and then the firing pattern shifts to another configuration. The metastability, or tendency to hop between transient states, continues after a stimulus is introduced, but some states tend to arise more often for a particular stimulus and are therefore thought of as “coding states.”

It also highlights the need to move away from focusing on single neurons that respond to particular cues, and toward making internal states and dynamics more explicit in our understanding of sensory networks — even for the most basic sensory stimuli. “It’s much easier to say that a neuron increases its firing rate,” said Anan Moran, a neurobiologist at Tel Aviv University in Israel. But to understand how organisms work, “you cannot account only for the stimulus, but also for the internal state,” he added. “And this means that our previous [understanding of] the mechanism used by the brain to achieve perception and action and so on needs to be reevaluated.”

Brains Speed Up Perception by Guessing What’s Next
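As an aside on the "metastable state" idea in the excerpt above - here is a toy sketch (not the analysis used in the research) of a system that dwells in one firing pattern for a while and then hops to another. The states and probabilities are made up purely for illustration.

```python
# Toy illustration (not the analysis from the article): a Markov chain whose
# states stand in for transient network-wide firing patterns. High self-transition
# probability produces long dwell times before the system "hops", which is the
# qualitative signature of metastability described above.
import random

STATES = ["A", "B", "C"]   # hypothetical firing-pattern states
STAY = 0.95                # assumed probability of staying in the current state

def simulate(n_steps=200, seed=1):
    random.seed(seed)
    state = random.choice(STATES)
    trace = [state]
    for _ in range(n_steps - 1):
        if random.random() >= STAY:
            state = random.choice([s for s in STATES if s != state])
        trace.append(state)
    return trace

if __name__ == "__main__":
    trace = simulate()
    # Print dwell episodes: each line is one metastable visit and its duration.
    current, length = trace[0], 1
    for s in trace[1:]:
        if s == current:
            length += 1
        else:
            print(f"state {current}: {length} steps")
            current, length = s, 1
    print(f"state {current}: {length} steps")
```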




The last 40 years of research across multiple scientific disciplines has proven, with certainty, that homo economicus does not exist. Outside of economic models, this is simply not how real humans behave. Rather, Homo sapiens have evolved to be other-regarding, reciprocal, heuristic, and intuitive moral creatures. We can be selfish, yes—even cruel. But it is our highly evolved prosocial nature—our innate facility for cooperation, not competition—that has enabled our species to dominate the planet, and to build such an extraordinary—and extraordinarily complex—quality of life. Pro-sociality is our economic super power.

Economists are not wrong when they attribute the material advances of modernity to market capitalism’s genius for self-organizing an increasingly complex and intricate division of knowledge, knowhow, and labor. But it’s important to recognize that the division of labor was not invented in the pin factories of Adam Smith’s eighteenth century Scotland; at some level, it has been a defining feature of all human societies since at least the cognitive revolution. Even our least complex societies, small bands of hunter-gatherers, are characterized by a division of labor—hunting and gathering—if largely along gender lines. The division of labor is a trait that is universal to our prosocial species.

Viewed through this prosocial lens, we can see that the highly specialized division of labor that characterizes our modern economy was not made possible by market capitalism. Rather, market capitalism was made possible by our fundamentally prosocial facility for cooperation, which is all the division of labor really is.

This dispute over behavioral models has profound non-academic consequences. Many economists, while acknowledging its flaws, still defend homo-economicus as a useful fiction—a tool for modeling and understanding the economic world. But it is much more than just an economic model. It is also a story we tell ourselves about ourselves that gives both permission and encouragement to some of the worst excesses of modern capitalism, and of contemporary moral and social life.

How to Destroy Neoliberalism: Kill ‘Homo Economicus’




the culture industries are very much caught up in the search for monopoly rent. It’s interesting that they’re called “industries” these days, which means that there’s a commodification of culture and an attempt to commodify the cultural commons and even commodify history, which is an astonishing process.

Capitalism and the Urban Struggle




This is a very important signal from Kevin Kelly - signalling the need to re-imagine a more appropriate economic, governance, and political framework for understanding the fundamentally ‘anti-rival’ nature of information (including data) as a global commons, and the need to resist the enclosure of knowledge, the Internet and the emerging digital environment.

Data Manifesto

1) Data cannot be owned. By anybody.

2) The natural habitat of data is in the commons. It is born in the commons, and will return to the commons, even if it is granted temporary monopolies. The longer it spends in the commons, the better.

3) Data is a shared resource, that only exists in relationship to its sources and substrates.

4) Any party that touches or generates a bit of data has rights and responsibilities about that data.

5) Rights always have corresponding responsibilities.

6) Control of data is both a right and responsibility that is always shared.

7) Privacy is a misunderstanding that does not apply to data.

8) Data is made more valuable by being connected to other data. Solitary data is worthless.

9) Data is made more valuable by moving. Storage is weak because it halts, “Movage” is better.

10) Both directions of movage are important — where it came from, where it goes.

11) The meta data about where data goes is as important as where it came from.

12) Ensuring bi-directionality, the symmetry of movage, is important to the robustness of the data net.

13) Data can generate infinite derivative data (meta data) but they all follow the same rules.

14) When new data is generated from data (meta data) the rights and responsibilities of the first generation proceed to the second.

15) At the same time, meta data has claims of rights and responsibilities upon the root data.

16) Data can be expensive or free, determined by the market. It has no inherent value.

17) Data is easy to replicate in time (free copies) and difficult to replicate over time (digital decay). The only way to carry data into the future is if it is exercised (moved) by those who care about it.

18) Like all other shared resources, data can suffer from the tragedy of the commons, and this commons must be protected by governments.

19) As the number of entities, including meta data, touching a bit of data expands over time, with claims of rights and responsibilities, some values will dilute and some will amplify.

20) To manage the web of relationships, rights and responsibilities of data will require technological and social tools that don’t exist yet.


This is an interesting 7 min video signal of how an undercurrent of digital activism is working to provide alternatives to for-profit social media.

Distributed social media - Mastodon & Fediverse Explained

Mastodon is a "federated" social network that works like Twitter. It puts the control of data into the user's hands, not in a single corporation.

Mastodon uses ActivityPub to make sure that each Mastodon instance can reach the others.

ActivityPub is also implemented by other applications such as PeerTube and Plume. This is what makes up the *Fediverse*: a collection of social networks that function as a single one.
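For a feel of what federation looks like under the hood, here is a minimal sketch of the kind of JSON "activity" ActivityPub servers exchange. The instance URLs and user names are hypothetical, and real federation also involves discovery (WebFinger) and signed HTTP delivery not shown here.

```python
# A minimal sketch of an ActivityPub-style "Follow" activity, the kind of JSON
# document federated servers (Mastodon, PeerTube, Plume, ...) exchange.
# The instance URLs and actor names below are hypothetical; real federation
# also requires WebFinger discovery and HTTP-signature authentication.
import json

follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://example-instance-a.social/users/alice",   # hypothetical
    "object": "https://example-instance-b.video/users/bob",     # hypothetical
}

# Each instance delivers activities to the other actor's inbox endpoint,
# which is how independent servers behave as one "Fediverse".
print(json.dumps(follow_activity, indent=2))
```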


Future challenges, including climate change, require multi-dimensional innovation and transdisciplinary work. The focus must include technological, social, governance and economic institutional innovations. This is a long article signalling one such approach.

Fix the broken food system in three steps

Build a global network for mapping, modelling and managing agriculture, biodiversity, trade and nutrition, argue Guido Schmidt-Traub, Michael Obersteiner and Aline Mosnier.
Land use and food production are not meeting people’s needs. Agriculture destroys forests and biodiversity, squanders water and releases one-quarter of global greenhouse-gas emissions. Yet one-third of food is wasted, 800 million people remain undernourished, 2 billion are deficient in micronutrients, and obesity is on the rise. These figures will worsen as the planet warms, soils degrade and the global population grows, urbanizes and consumes more.

Threats to agriculture, climate and health are entwined. Yet policies treat each in isolation and are misaligned. National strategies for mitigating climate change pay scant attention to biodiversity and food security. The European Union’s Common Agricultural Policy includes steps to reduce emissions from livestock and fertilizers, for example, but offers no way of improving diets.

What is needed are strategies for managing land-use and food systems together. These would consider links between agriculture, water, pollution, biodiversity, diets and greenhouse-gas emissions. Each sector and country can tailor solutions. But global coordination, learning and knowledge-sharing will also be necessary to ensure that the net result is sustainable and resilient, and in line with the Sustainable Development Goals (SDGs) and the 2015 Paris climate agreement.

Here we describe three steps for developing such integrated approaches.


From the brain that changes itself - to AI-generated visuals that could be tailored to develop specific neural capabilities or treat specific neural conditions - this is a signal worth tracking.
Viewing any image triggers some kind of neural activity in a brain. But neuroscientist Kohitij Kar of MIT and colleagues wanted to see whether the AI’s deliberately designed images could induce specific neural responses of the team’s choosing.

An AI used art to control monkeys’ brain cells

Such tailored regulation of neural activity could lead to new types of neuroscience experiments
New artwork created by artificial intelligence does weird things to the primate brain.
When shown to macaques, AI-generated images purposefully caused nerve cells in the monkeys’ brains to fire more than pictures of real-world objects. The AI could also design patterns that activated specific neurons while suppressing others, researchers report in the May 3 Science.

This unprecedented control over neural activity using images may lead to new kinds of neuroscience experiments or treatments for mental disorders. The AI’s ability to play the primate brain like a fiddle also offers insight into how closely AIs can emulate brain function.

The AI responsible for the new mind-bending images is an artificial neural network — a computer model composed of virtual neurons — modeled after the ventral stream. This is a neural pathway in the brain involved in vision (SN Online: 8/12/09). The AI learned to “see” by studying a library of about 1.3 million labeled images. Researchers then instructed the AI to design pictures that would affect specific ventral stream neurons in the brain.
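The paper's own model and image-synthesis procedure are specific to the study, but the general idea - optimizing an image so that a chosen unit in a neural network responds strongly ("activation maximization") - can be sketched in a few lines. Everything below (the tiny network, the unit index, the step count) is a stand-in for illustration only.

```python
# Generic "activation maximization" sketch (NOT the procedure from the paper):
# start from noise and adjust the image by gradient ascent so that one chosen
# unit in a small convolutional network responds as strongly as possible.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A stand-in network; the study used a model trained on ~1.3 million images.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the input image gets optimized

target_unit = 3                       # the "neuron" we want to drive
image = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = model(image)[0, target_unit]
    (-activation).backward()          # ascend on the unit's activation
    optimizer.step()

with torch.no_grad():
    final = model(image)[0, target_unit].item()
print(f"final activation of unit {target_unit}: {final:.3f}")
```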


Artificial photosynthesis promises new ways to capture carbon, complementing other technologies aimed at reducing our production of carbon. A hopeful weak signal ready for trial.

World’s first ‘BioSolar Leaf’ to tackle air pollution in White City

Imperial College London is to collaborate with startup Arborea to develop pioneering ‘BioSolar Leaf’ technology to improve air quality in White City.
The technology, which is the first of its kind in the world, purifies the air through the photosynthesis of microscopic plants, removing greenhouse gases from the environment whilst generating breathable oxygen.

Arborea have developed an innovative cultivation system which facilitates the growth of tiny plant-life - such as microalgae, diatoms and phytoplankton - on large solar panel-like structures. These can then be installed on land, buildings and other developments to improve surrounding air quality.

The team say that Arborea’s cultivation system can remove carbon dioxide and produce breathable oxygen at a rate equivalent to a hundred trees from the surface area of just a single tree.

The system also produces a sustainable source of organic biomass from which Arborea extracts nutritious food additives for plant-based food products.


A signal for contributing to how we meet the challenges of climate change and terra-forming. There’s a 2 min video.
“We now have a case confirmed of what species we can plant and in what conditions,” Irina Fedorenko, co-founder of Biocarbon Engineering, told Fast Company. “We are now ready to scale up our planting and replicate this success.”

These tree-planting drones are firing ‘seed missiles’ into the ground. Less than a year later, they’re already 20 inches tall.

In September 2018, a project in Myanmar used drones to fire “seed missiles” into remote areas of the country where trees were not growing. Less than a year later, thousands of those seed missiles have sprouted into 20-inch mangrove saplings that could literally be a case study in how technology can be used to innovate our way out of the climate change crisis.

According to Fedorenko, just two operators could send out a mini-fleet of seed missile planting drones that could plant 400,000 trees a day -- a number that quite possibly could make massive headway in combating the effects of manmade climate change.


This is a very interesting signal related to new materials for ecological consumption. There’s a 1 min video showing these in action.

Edible water bottles

London-based tech startup Skipping Rocks Lab wants to make packaging disappear. They have created a water bottle you can eat.

Ooho! is a spherical packaging made of seaweed, entirely natural and biodegradable. Inspired by egg yolks, water is trapped inside layers made up of brown algae and calcium chloride and can be drunk when those membranes are punctured. The membranes have been compared to the skin of an apple, as they can then be eaten or thrown away.

The goal is to create a waste-free alternative to plastic bottles and cups with a material that is cheaper than plastic and can encapsulate any beverage including water, soft drinks, spirits and cosmetics.


When will the internal combustion engine become displaced as the most affordable choice? It seems to be getting closer.
Analysts have for several years been using a sort of shorthand for describing an electric vehicle battery: half the car’s total cost. That figure, and that shorthand, has changed in just a few years. For a midsize U.S. car in 2015, the battery made up more than 57 percent of the total cost. This year, it’s 33 percent. By 2025, the battery will be only 20 percent of total vehicle cost.

Electric Car Price Tag Shrinks Along With Battery Cost

Choosing an electric car over its combustion-engine equivalent will soon be just a matter of taste, not a matter of cost.
Every year, BloombergNEF’s advanced transport team builds a bottom-up analysis of the cost of purchasing an electric vehicle and compares it to the cost of a combustion-engine vehicle of the same size. The crossover point — when electric vehicles become cheaper than their combustion-engine equivalents — will be a crucial moment for the EV market. All things being equal, upfront price parity makes a buyer’s decision to buy an EV a matter of taste, style or preference — but not, for much longer, a matter of cost.

Every year, that crossover point gets closer. In 2017, a BloombergNEF analysis forecast that the crossover point was in 2026, nine years out. In 2018, the crossover point was in 2024 — six years (or, as I described it then, two lease cycles) out.

The crossover point, per the latest analysis, is now 2022 for large vehicles in the European Union. For that, we can thank the incredible shrinking electric vehicle battery, which isn’t so much shrinking in size as it is shrinking — dramatically — in cost.
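A rough sketch of the logic behind the crossover point: hold the non-battery cost of an EV and the price of a comparable combustion car fixed, let the per-kWh battery price fall each year, and find the first year the EV is cheaper. All the numbers below are assumptions for illustration, not BloombergNEF's figures or methodology.

```python
# Illustrative-only sketch of why falling battery prices pull the price-parity
# ("crossover") year closer. Every number here is an assumption.
def ev_price(year, pack_kwh=60, price_2019_per_kwh=180.0, annual_decline=0.12,
             rest_of_vehicle=22_000.0):
    """Sticker price of a hypothetical EV: non-battery cost plus a pack whose
    per-kWh price falls by a fixed fraction each year."""
    per_kwh = price_2019_per_kwh * (1 - annual_decline) ** (year - 2019)
    return rest_of_vehicle + pack_kwh * per_kwh

ICE_PRICE = 30_000.0   # assumed comparable combustion-engine vehicle, held flat

for year in range(2019, 2031):
    ev = ev_price(year)
    marker = "  <- crossover" if ev <= ICE_PRICE else ""
    print(f"{year}: EV ~${ev:,.0f} vs ICE ${ICE_PRICE:,.0f}{marker}")
```

With these made-up inputs the crossover lands in the early 2020s; the real analysis depends on segment, region and battery chemistry, which is why BloombergNEF's date shifts each year as pack prices fall faster than expected.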


The hydrogen economy has taken longer than many expected - here’s a good signal of its continued progress - maybe it’s around the corner.

Clean fuel cells could be cheap enough to replace gas engines in vehicles

Advancements in zero-emission fuel cells could make the technology cheap enough to replace traditional gasoline engines in vehicles, according to researchers at the University of Waterloo.

The researchers have developed a new fuel cell that lasts at least 10 times longer than current technology, an improvement that would make them economically practical, if mass-produced, to power vehicles with electricity.

"With our design approach, the cost could be comparable or even cheaper than gasoline engines," said Xianguo Li, director of the Fuel Cell and Green Energy Lab at Waterloo. "The future is very bright. This is clean energy that could boom."

A paper on their work, Enhancing fuel cell durability for fuel cell plug-in hybrid electric vehicles through strategic power management, appears in the journal Applied Energy.


Another signal of several things - a new line of evidence related to climate change as well as how AI and machine learning are providing fundamentally new ways to analyse data for new patterns.
"We are seeing more El Niños forming in the central Pacific Ocean in recent decades, which is unusual across the past 400 years," said lead author Dr. Mandy Freund.

Impossible research produces 400-year El Nino record, revealing startling changes

Australian scientists have developed an innovative method using cores drilled from coral to produce a world first 400-year long seasonal record of El Niño events, a record that many in the field had described as impossible to extract.

The record published today in Nature Geoscience detects different types of El Niño and shows the nature of El Niño events has changed in recent decades.

This understanding of El Niño events is vital because they produce extreme weather across the globe with particularly profound effects on precipitation and temperature extremes in Australia, South East Asia and the Americas.

The 400-year record revealed a clear change in El Niño types, with an increase of Central Pacific El Niño activity in the late 20th Century and suggested future changes to the strength of Eastern Pacific El Niños.


Moving beyond facial recognition, AI is progressing to whole-body recognition - and as with all technology, this is both exciting and hugely scary.

A multi-scale body-part mask guided attention network for person re-identification

Person re-identification entails the automated identification of the same person in multiple images from different cameras and with different backgrounds, angles or positions. Despite recent advances in the field of artificial intelligence (AI), person re-identification remains a highly challenging task, particularly due to the many variations in a person's pose, as well as other differences associated with lighting, occlusion, misalignment and background clutter.

Researchers at the Suning R&D Center in the U.S. have recently developed a new technique for person re-identification based on a multi-scale body-part mask guided attention network (MMGA). Their paper, pre-published on arXiv, will be presented during the 2019 CVPR Workshop spotlight presentation in June.

"Person re-identification is becoming a more and more important task due to its wide range of potential applications, such as criminal investigation, public security and image retrieval," Honglong Cai, one of the researchers who carried out the study, told TechXplore. "However, it remains a challenging task, due to occlusion, misalignment, variation of poses and background clutter. In our recent study, our team tried to develop a method to overcome these challenges."


And if we are worried about ‘deep fakes’ we move from full body recognition to even deeper fakes. Hollywood will be both excited and scared.

Amazing AI Generates Entire Bodies of People Who Don’t Exist

The algorithm whips up photorealistic models and outfits from scratch.
A new deep learning algorithm can generate high-resolution, photorealistic images of people — faces, hair, outfits, and all — from scratch.

The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media.


Another signal of our advancing capacity to access media via multiple channels simultaneously. This is worth the view.

Android Q’s Live Caption feature adds real-time subtitles to any audio or video playing on your phone

It even works when using video chat apps
Google waited until I/O 2019 to demonstrate one of the most impressive features of Android Q. It’s called Live Caption, and when enabled, you’ll see any video or audio you play on your phone transcribed in real time — with extremely accurate results. Live Captions are overlaid on top of whatever app you’re using, be it YouTube, Instagram, Pocket Casts, or anything else, and it also supports video chat apps like Skype and Google’s own Duo. It’ll even work with video or audio that you record yourself.

“For 466 million deaf and hard of hearing people around the world, captions are more than a convenience — they make content more accessible. We worked closely with the Deaf community to develop a feature that would improve access to digital media,” Google wrote in a blog post.


There are some who still feel prediction of the future is possible - here’s an interesting signal about the progress in prediction.
Key developments in observation, numerical modeling, and data assimilation have enabled these advances in forecast skill. Improved observations, particularly by satellite remote sensing of the atmosphere and surface, provide valuable global information many times per day. Much faster and more powerful computers, in conjunction with improved understanding of atmospheric physics and dynamics, allow more-accurate numerical prediction models. Finally, improved techniques for putting data and models together have been developed.

Advances in weather prediction

In 1938, an intense hurricane struck the New England coast of the United States without warning, killing more than 600 people. Since then, death tolls have dropped dramatically even though coastal populations have swelled. Many people and organizations contributed to this improvement. But, as the American Meteorological Society celebrates its 100th anniversary, the improvement in forecasting stands out. Modern 72-hour predictions of hurricane tracks are more accurate than 24-hour forecasts were 40 years ago (see the figure), giving sufficient time for evacuations and other preparations that save lives and property. Similar improvements in forecasting tropical cyclone tracks have been achieved by other leading agencies worldwide.

Weather forecasts from leading numerical weather prediction centers such as the European Centre for Medium-Range Weather Forecasts (ECMWF) and National Oceanic and Atmospheric Administration's (NOAA's) National Centers for Environmental Prediction (NCEP) have also been improving rapidly: A modern 5-day forecast is as accurate as a 1-day forecast was in 1980, and useful forecasts now reach 9 to 10 days into the future (1). Predictions have improved for a wide range of hazardous weather conditions, including hurricanes, blizzards, flash floods, hail, and tornadoes, with skill emerging in predictions of seasonal conditions.

Because data are unavoidably spatially incomplete and uncertain, the state of the atmosphere at any time cannot be known exactly, producing forecast uncertainties that grow into the future. This sensitivity to initial conditions can never be overcome completely. But, by running a model over time and continually adjusting it to maintain consistency with incoming data, the resulting physically consistent predictions greatly improve on simpler techniques. Such data assimilation, often done using four-dimensional variational minimization, ensemble Kalman filters, or hybridized techniques, has revolutionized forecasting.
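The "ensemble Kalman filter" idea mentioned above can be shown in a toy scalar case: blend a forecast ensemble with a noisy observation, weighting each by its uncertainty. This is only the flavour of the method - operational assimilation does this for millions of coupled variables at every analysis cycle.

```python
# Toy scalar data assimilation in the spirit of an ensemble Kalman filter:
# blend a forecast ensemble with a noisy observation, weighted by uncertainty.
import numpy as np

rng = np.random.default_rng(42)

truth = 15.0                       # "true" temperature we are trying to estimate
obs_error_var = 1.0                # assumed observation error variance
observation = truth + rng.normal(scale=np.sqrt(obs_error_var))

# Forecast ensemble: model runs scattered around a biased first guess.
ensemble = rng.normal(loc=13.0, scale=2.0, size=50)
forecast_var = ensemble.var(ddof=1)

# Kalman gain: how much to trust the observation versus the forecast spread.
gain = forecast_var / (forecast_var + obs_error_var)

# Update each member toward a perturbed copy of the observation.
perturbed_obs = observation + rng.normal(scale=np.sqrt(obs_error_var),
                                         size=ensemble.size)
analysis = ensemble + gain * (perturbed_obs - ensemble)

print(f"forecast mean {ensemble.mean():.2f}, analysis mean {analysis.mean():.2f}, "
      f"observation {observation:.2f}, truth {truth:.2f}")
```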


This is a summary of a new book by Stuart Kauffman - a nice complementary piece on the limits of predictability and on an understanding of life and evolution that extends beyond a ‘Physics Worldview’ of causality.

The new physics needed to probe the origins of life

Stuart Kauffman’s provocative take on emergence and evolution energizes Sara Imari Walker.
A World Beyond Physics: The Emergence and Evolution of Life Stuart A. Kauffman Oxford University Press (2019)
Among the great scientific puzzles of our time is how life emerged from inorganic matter. Scientists have probed it since the 1920s, when biochemists Aleksandr Oparin and J. B. S. Haldane (separately) investigated the properties of droplets rich in organic molecules that existed in a ‘prebiotic soup’ on the primitive Earth…

What was missing then, as now, is a concrete theory for the physics of what life is, testable against experiment — which is likely to be more universal than the chemistry of life on Earth. Decades after Oparin and Haldane, Erwin Schrödinger’s 1944 book What Is Life? attempted to lay conceptual foundations for such a theory. Yet, more than 70 years and two generations of physicists later, researchers still ponder whether the answers lie in unknown physics. No one has led the charge on these questions quite like Stuart Kauffman.

His key insight is motivated by what he calls “the nonergodic world” — that of objects more complex than atoms. Most atoms are simple, so all their possible states can exist over a reasonable period of time. Once they start interacting to form molecules, the number of possible states becomes mind-bogglingly massive. Only a tiny number of proteins that are modestly complex — say, 200 amino acids long — have emerged over the entire history of the Universe. Generating all 20^200 of the possibilities would take aeons. Given such limitations, how does what does exist ever come into being?

This is where Kauffman expands on his autocatalytic-sets theory, introducing concepts such as closure, in which processes are linked so that each drives the next in a closed cycle. He posits that autocatalysing sets (of RNA, peptides or both) encapsulated in a sphere of lipid molecules could form self-reproducing protocells. And he speculates that these protocells could evolve. Thus, each new biological innovation begets a new functional niche fostering yet more innovation. You cannot predict what will exist, he argues, because the function of everything biology generates will depend on what came before, and what other things exist now, with an ever-expanding set of what is possible next.

Because of this, Kauffman provocatively concludes, there is no mathematical law that could describe the evolving diversity and abundance of life in the biosphere. He writes: “we do not know the relevant variables prior to their emergence in evolution.” At best, he argues, any ‘laws of life’ that do exist will describe statistical distributions of aspects of that evolution. For instance, they might predict the distribution of extinctions. Life’s emergence might rest on the foundations of physics, “but it is not derivable from them”, Kauffman argues.
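On the arithmetic behind the "nonergodic world" passage above - 20 possible amino acids at each of 200 positions - a two-line computation makes the scale concrete against the commonly cited ~10^80 atoms in the observable universe.

```python
# Making the "nonergodic" point concrete: 20 possible amino acids at each of
# 200 positions gives 20**200 candidate proteins, dwarfing the commonly cited
# ~10**80 atoms in the observable universe.
import math

candidates = 20 ** 200
print(f"20^200 is about 10^{math.log10(candidates):.0f}")          # ~10^260
print(f"ratio to ~10^80 atoms is about 10^{math.log10(candidates) - 80:.0f}")
```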

Thursday, May 2, 2019

Friday Thinking 3 May 2019


Content
Quotes:


Articles:



The methods developed so far really think about genetics and environment as separate and orthogonal, as independent factors. When in truth, they’re not independent.

New Turmoil Over Predicting the Effects of Genes



In medicine, believing something is true is not the same as being able to prove it. Because the idea that inflammation—constant, low-level, immune-system activation —could be at the root of many noncommunicable diseases is a startling claim, it requires extraordinary proof. Can seemingly unconnected illnesses of the brain, the vasculature, lungs, liver, and joints really share a deep biological link? Evidence has been mounting that these common chronic conditions—including Alzheimer’s, cancer, arthritis, asthma, gout, psoriasis, anemia, Parkinson’s disease, multiple sclerosis, diabetes, and depression among them—are indeed triggered by low-grade, long-term inflammation. But it took that large-scale human clinical trial to dispel any lingering doubt: the immune system’s inflammatory response is killing people by degrees.

Could inflammation be the cause of myriad chronic conditions?



Medical devices embedded deep in human flesh. Mushrooms growing designer chairs. Engineered probiotic bacteria colonising the guts of soldiers. Implants; fungal factories; bacteria. All three are “biodesigns”, yet each is a product of a very different discipline: biomedical engineering, design, and synthetic biology. Over the last twenty years, each field has in turn claimed the fusing of biology and design as their own. If design is humanity’s process for changing present conditions to other, preferred ones (to paraphrase political scientist Herbert Simon), then biodesign—which we broadly define here as the design of, with, or from biology—offers novel perspectives on what change could look like, for ourselves and other living things. Altered or designed by humans, these organisms could populate “other biological futures”; possible futures different to those dictated by our planet’s naturally evolved present.

Other Biological Futures



Epistemology is the term that describes how we know what we know. Most people who think about knowledge think about the processes of obtaining it. Ignorance is often assumed to be not-yet-knowledgeable. But what if ignorance is strategically manufactured? What if the tools of knowledge production are perverted to enable ignorance? In 1995, Robert Proctor and Iain Boal coined the term “agnotology” to describe the strategic and purposeful production of ignorance. In an edited volume called Agnotology, Proctor and Londa Schiebinger collect essays detailing how agnotology is achieved. Whether we’re talking about the erasure of history or the undoing of scientific knowledge, agnotology is a tool of oppression by the powerful.


What’s at stake right now is not simply about hate speech vs. free speech or the role of state-sponsored bots in political activity. It’s much more basic. It’s about purposefully and intentionally seeding doubt to fragment society. To fragment epistemologies. This is a tactic that was well-honed by propagandists.


The goal is no longer just to go straight to the news media. It’s to first create a world of content and then to push the term through to the news media at the right time so that people search for that term and receive specific content. Terms like caravan, incel, crisis actor. By exploiting the data void, or the lack of viable information, media manipulators can help fragment knowledge and seed doubt.

Agnotology and Epistemological Fragmentation




Scientific advances are accelerating - we all know that. A key implication is the proliferation of new knowledge domains and new fields and methods of inquiry. Here is a good signal of a new domain of study.
“We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously,” Rahwan said. “This calls for a new field of scientific study that looks at them not solely as products of engineering and computer science but additionally as a new class of actors with their own behavioral patterns and ecology.”

Studying the behavior of AI

A new paper frames the emerging interdisciplinary field of machine behavior
As our interaction with “thinking” technology rapidly increases, a group led by researchers at the MIT Media Lab are calling for a new field of research—machine behavior—which would take the study of artificial intelligence well beyond computer science and engineering into biology, economics, psychology, and other behavioral and social sciences.


“We need more open, trustworthy, reliable investigation into the impact intelligent machines are having on society, and so research needs to incorporate expertise and knowledge from beyond the fields that have traditionally studied it,” said Iyad Rahwan, who leads the Scalable Cooperation group at the Media Lab.


Rahwan, Manuel Cebrian and Nick Obradovich, along with other scientists from the Media Lab convened colleagues at the Max Planck Institutes, Stanford University, the University of California San Diego, and other educational institutions as well as from Google, Facebook, and Microsoft, to publish a paper in Nature making a case for a wide-ranging scientific research agenda aimed at understanding the behavior of artificial intelligence systems.

This is a must-see 15 min TED Talk from one of the journalists who broke the Cambridge Analytica story - for anyone concerned with the future of democracy in the digital environment.

Facebook's role in Brexit — and the threat to democracy

In an unmissable talk, journalist Carole Cadwalladr digs into one of the most perplexing events in recent times: the UK's super-close 2016 vote to leave the European Union. Tracking the result to a barrage of misleading Facebook ads targeted at vulnerable Brexit swing voters -- and linking the same players and tactics to the 2016 US presidential election -- Cadwalladr calls out the "gods of Silicon Valley" for being on the wrong side of history and asks: Are free and fair elections a thing of the past?

The future of management? This is a bleak signal.

How Amazon automatically tracks and fires warehouse workers for ‘productivity’

Documents show how the company tracks and terminates workers
Amazon’s fulfillment centers are the engine of the company — massive warehouses where workers track, pack, sort, and shuffle each order before sending it on its way to the buyer’s door.


Critics say those fulfillment center workers face strenuous conditions: workers are pressed to “make rate,” with some packing hundreds of boxes per hour, and losing their job if they don’t move fast enough. “You’ve always got somebody right behind you who’s ready to take your job,” says Stacy Mitchell, co-director of the Institute for Local Self-Reliance and a prominent Amazon critic.


The automation aspect: Amazon tracks every individual worker’s productivity, and automatically generates warnings or even terminations without any input from supervisors, the company said. Managers can override the process, but it didn’t say how regularly they do. Amazon says that the termination process can be appealed.

The surveilled world is here and spreading - one key to creating conditions for democracy could be robust legislation and technology enabling ‘sousveillance’.

Made in China, Exported to the World: The Surveillance State

In Ecuador, cameras capture footage to be examined by police and domestic intelligence. The surveillance system’s origin: China.
The squat gray building in Ecuador’s capital commands a sweeping view of the city’s sparkling sprawl, from the high-rises at the base of the Andean valley to the pastel neighborhoods that spill up its mountainsides.


The police who work inside are looking elsewhere. They spend their days poring over computer screens, watching footage that comes in from 4,300 cameras across the country.


The high-powered cameras send what they see to 16 monitoring centers in Ecuador that employ more than 3,000 people. Armed with joysticks, the police control the cameras and scan the streets for drug deals, muggings and murders. If they spy something, they zoom in.


This voyeur’s paradise is made with technology from what is fast becoming the global capital of surveillance: China.

Remember the argument that if you have nothing to hide then you have nothing to fear from surveillance? This is an interesting signal to counter that argument.

Manspreading on the Beijing subway could give you bad social credit

China plans to roll out a national social credit system by 2020
Manspreading is considered by many a breach of subway etiquette. It became such a scourge in 2015 that New York transport authorities started putting up posters asking men to keep their legs together. In Beijing, though, space hoarders could soon face consequences more serious than just the scorn of fellow riders.


The Beijing Municipal Commission of Transportation is proposing to link a passenger’s bad behavior on the subway with social credit. Those who take up extra seats on trains will be marked as “uncivilized” -- and so will those who eat during rides, hawk goods to other passengers or sneak into the subway without paying.

It seems that the legalization of Cannabis is spreading and with it a new openness to other ways to alter one’s consciousness. This is likely very important as the complexity of the challenges we face may mean we also have to (literally) change our minds. This is a good signal of that trend.
“This new Centre represents a watershed moment for psychedelics science; symbolic of its now mainstream recognition. Psychedelics are set to have a major impact on neuroscience and psychiatry in the coming years. It’s such a privilege to be at the forefront of one of the most exciting areas in medical science. I am immensely grateful to the donors who have made all of this possible."

Imperial launches world’s first Centre for Psychedelics Research

The first formal centre for psychedelic research in the world will launch at Imperial College London today.
Funded by more than £3 million from five founding donors, the new Imperial Centre for Psychedelic Research will build on over a decade of pioneering work in this area carried out at Imperial, including a clinical trial that has kick-started global efforts to develop psilocybin therapy into a licensed treatment for depression. It will also investigate their potential for treating other conditions, including anorexia.


Led by Dr Robin Carhart-Harris, the Centre will focus on two main research themes: the use of psychedelics in mental health care; and as tools to probe the brain’s basis of consciousness.


The newly established Centre will be based at Imperial’s Hammersmith campus, sharing space between Imperial College London and the Imperial College Healthcare NHS Trust.


The Centre also aims to develop a research clinic that could help to gather additional clinical evidence and become a prototype for the licensed psychedelic care facilities of the future.

Although this is still a weak signal - it seems like it will inevitably emerge as a part of brain-machine interfaces.

Brain signals translated into speech using artificial intelligence

Technology could one day be used to help people who can’t talk to communicate.
In an effort to provide a voice for people who can’t speak, neuroscientists have designed a device that can transform brain signals into speech.


This technology isn’t yet accurate enough for use outside the lab, although it can synthesize whole sentences that are mostly intelligible. Its creators described their speech-decoding device in a study published on 24 April in Nature.


Scientists have previously used artificial intelligence to translate single words, mostly consisting of one syllable, from brain activity, says Chethan Pandarinath, a neuroengineer at Emory University in Atlanta, Georgia, who co-wrote a commentary accompanying the study. “Making the leap from single syllables to sentences is technically quite challenging and is one of the things that makes the current work so impressive,” he says.


Creating the audible synthetic sentences required years of analysis after brain signals were recorded, and the technique is not ready to be used outside of a lab. Still, the work shows “that just using the brain, it is possible to decode speech,” says UCSF speech scientist Gopala Anumanchipalli.

This is a great signal with a lot of promise for transforming our agriculture and providing a new source of food.

This tiny new grain could save the planet

There's a new food we are all likely to hear a lot more about in the future. Developed from wheatgrass, 'Kernza' is being hailed as a weapon against climate change that could also protect the environment and revolutionize farming. Big claims for a grain that is but one-fifth the size of wheat.


Americans can already buy pasta, pizza, bread and beer made with the grain, which was trademarked 'Kernza' by the Land Institute in Kansas. ...the grain has a much more serious purpose. Unlike wheat, barley and other cereals it is a perennial plant whose roots can be left in the ground to regrow after harvesting.


That removes the need to clear fields, plough and reseed every year, saving energy and reducing farmers’ carbon emissions. Kernza can also be harvested using existing farm machinery. Kernza roots extend over 3 metres beneath the soil, more than twice the depth of wheat, helping to stabilise soil, retain water and improve wildlife habitat.


While wheat has been around for more than 10,000 years, Kernza is the new kid on the block. It was first bred less than two decades ago and there are currently fewer than 500 hectares of the crop under cultivation.

Another signal of emerging AI and micro-medical tools for new forms of surgery and treatment.

A robotic catheter has autonomously wound its way inside a live, beating pig’s heart

The device was inspired by the way cockroaches feel their way along tunnels.
Operating inside a beating heart is a complex, delicate procedure that requires skilled surgeons. Medical personnel typically use joysticks and a combination of x-rays or ultrasound to carefully guide catheters through the body.


Now, for the first time, a robotic catheter has been able to autonomously navigate inside a heart to help carry out a particularly complex procedure. The device, which was inspired by the way certain animals learn about their surroundings, was used to help surgeons close leakages within the hearts of five live pigs.


The device is 8 millimeters across, with a camera and an LED light on its tip that work as a combined optic and touch sensor. A machine-learning algorithm that was trained on approximately 2,000 heart-tissue images was used to guide it as it moved. The touch sensor periodically tapped against the heart’s tissue as the device wound its way through, helping it know where it was and making sure it wasn’t likely to damage the tissue.

A great weak signal of nanobots for dentistry. Nanobot toothbrushes.
"Treating biofilms that occur on teeth requires a great deal of manual labor, both on the part of the consumer and the professional," adds Steager. "We hope to improve treatment options as well as reduce the difficulty of care."

An army of microrobots can wipe out dental plaque

A team of engineers, dentists, and biologists from the University of Pennsylvania developed a microscopic robotic cleaning crew. With two types of robotic systems—one designed to work on surfaces and the other to operate inside confined spaces—the scientists showed that robots with catalytic activity could ably destroy biofilms, sticky amalgamations of bacteria enmeshed in a protective scaffolding. Such robotic biofilm-removal systems could be valuable in a wide range of potential applications, from keeping water pipes and catheters clean to reducing the risk of tooth decay, endodontic infections, and implant contamination.


The work, published in Science Robotics, was led by Hyun (Michel) Koo of the School of Dental Medicine and Edward Steager of the School of Engineering and Applied Science.


"This was a truly synergistic and multidisciplinary interaction," says Koo. "We're leveraging the expertise of microbiologists and clinician-scientists as well as engineers to design the best microbial eradication system possible. This is important to other biomedical fields facing drug-resistant biofilms as we approach a post-antibiotic era."

This is a fascinating weak signal suggesting a future form of treatment for muscle and other injuries - perhaps eventually to enhance strength, flexibility and resilience.
"We may be able to one day embed these structures under the skin, and the [coil] material would eventually be digested, while the new cells stay put," Guo says. "The nice thing about this method is, it's really general, and we can try different materials. This may push the limit of tissue engineering a lot."

'Nanofiber yarn' makes for stretchy, protective artificial tissue

The human body is held together by an intricate cable system of tendons and muscles, engineered by nature to be tough and highly stretchable. An injury to any of these tissues, particularly in a major joint like the shoulder or knee, can require surgical repairs and weeks of limited mobility to fully heal.


Now MIT engineers have come up with a tissue engineering design that may enable flexible range of motion in injured tendons and muscles during healing.
The team has engineered small coils lined with living cells that they say could act as stretchy scaffolds for repairing damaged muscles and tendons. The coils are made from hundreds of thousands of biocompatible nanofibers, tightly twisted into coils resembling miniature nautical rope, or yarn.


The researchers coated the yarn with living cells, including muscle and mesenchymal stem cells, which naturally grow and align along the yarn, into patterns similar to muscle tissue. The researchers found the yarn's coiled configuration helps to keep cells alive and growing, even as the team stretched and bent the yarn multiple times.


In the future, the researchers envision doctors could line patients' damaged tendons and muscles with this new flexible material, which would be coated with the same cells that make up the injured tissue. The "yarn's" stretchiness could help maintain a patient's range of motion while new cells continue to grow to replace the injured tissue.

This is a weak signal - but one that has been anticipated for a long time - hacking matter to create almost living robots.

Cornell scientists create ‘living’ machines that eat, grow, and evolve


The field of robotics is going through a renaissance thanks to advances in machine learning and sensor technology. Each generation of robot is engineered with greater mechanical complexity and smarter operating software than the last. But what if, instead of painstakingly designing and engineering a robot, you could just tear open a packet of primordial soup, toss it in the microwave on high for two minutes, and then grow your own ‘lifelike’ robot?
If you’re a Cornell research team, you’d grow a bunch and make them race.


Basically, the Cornell team grew their own robots using a DNA-based bio-material, observed them metabolizing resources for energy, watched as they decayed and grew, and then programmed them to race against each other. We would have made them compete in a karaoke competition, but Cornell’s application is also impressive.


As unbelievable as it sounds, the team is actually just getting started. Lead author on the team’s paper, Shogo Hamada, told The Stanford Chronicle that “ultimately, the system may lead to lifelike self-reproducing machines.”


This work is still in its infancy, but the implications of organically grown, self-reproducing machines are incredible. And the debate over whether robots can be “alive” will likely have an entire new chapter to discuss soon.

A good signal of the ongoing work in domesticating DNA and deepening our knowledge for a flourishing future.
The five-year, $2.6 million Redwood Genome Project, funded by San Francisco’s Save the Redwoods League, was started in 2017 and is the most intensive scientific study ever done on the state’s famous primeval forests.
“You think of plants generally — they don't have brains, so they can’t be that complicated, but a redwood has to stay in the same place for thousands of years and fight off everything that comes its way,” said Steven Salzberg, a professor of biomedical engineering at Johns Hopkins University in Baltimore, who skippered the sequencing work. “It has to have a pretty robust ability to fight off fungi, microbes, insects, beetles, and a vast array of temperatures and humidities.”

California scientists unravel genetic mysteries of world’s tallest trees

Scientists have unlocked the genetic codes of California’s most distinguished, longest-lasting residents — coast redwood and giant sequoia trees — in what is a major breakthrough in the quest to protect the magnificent forests from the ravages of climate change, researchers announced Tuesday.


The sequencing of the towering conifers’ genomes is being presented as a transformational moment for the ancient groves because it will allow scientists to figure out which trees are best suited for a warmer, more volatile future.


It turns out the coast redwood genome has six sets of chromosomes and 27 billion base pairs of DNA. That’s nine times the size of the human genome, which has a meager two sets of chromosomes. It even puts to shame the giant sequoia, which has more than 8 billion base pairs of DNA and is roughly three times the size of the human genome.

This is perhaps a weak signal - not just about quantum physics - but also pointing to emerging breakthroughs in fundamental science.

Quantum Physics Experiment Suggests That Reality Isn’t Objective

When it comes to quantum physics, there may be no such thing as a shared objective reality.
A new quantum physics experiment just lent evidence to a mind-boggling idea that was previously limited to the realm of theory, according to the MIT Technology Review — that under the right conditions, two people can observe the same event, see two different things happen, and both be correct.


According to research shared to the preprint server arXiv on Tuesday, physicists from Heriot-Watt University demonstrated for the first time how two people can experience different realities by recreating a classic quantum physics thought experiment.

This is another signal of how much there is to learn - who could predict the affordances of two sheets of carbon twisted to a weird angle.
It’s exceptionally difficult to twist two sheets of graphene exactly 1.1 degrees out of alignment. But this “magic angle” leads to extraordinary effects. “I couldn’t believe it,” said one scientist. “I mean I actually found it beyond belief.”

With a Simple Twist, a ‘Magic’ Material Is Now the Big Thing in Physics

The stunning emergence of a new type of superconductivity with the mere twist of a carbon sheet has left physicists giddy, and its discoverer nearly overwhelmed.
The discovery has been the biggest surprise to hit the solid-state physics field since the 2004 Nobel Prize–winning discovery that an intact sheet of carbon atoms — graphene — could be lifted off a block of graphite with a piece of Scotch tape. And it has ignited a frenzied race among condensed-matter physicists to explore, explain and extend the MIT results, which have since been duplicated in several labs.


The observation of superconductivity has created an unexpected playground for physicists. The practical goals are obvious: to illuminate a path to higher-temperature superconductivity, to inspire new types of devices that might revolutionize electronics, or perhaps even to hasten the arrival of quantum computers. But more subtly, and perhaps more important, the discovery has given scientists a relatively simple platform for exploring exotic quantum effects. “There’s an almost frustrating abundance of riches for studying novel physics in the magic-angle platform,” said Cory Dean, a physicist at Columbia University who was among the first to duplicate the research.

And quantum phenomena may now become perceivable to the human sensorium.
If humans can see single photons, an observer can play a direct role in a test of local realism
Exactly why and how superposition states collapse to definite outcomes is still one of the great mysteries of physics today. Testing quantum mechanics with a new, unique, ready-to-hand measurement apparatus – the human visual system – could rule out or provide evidence for certain theories.

Seeing the quantum

The human eye is a surprisingly good photon detector. What can it spy of the line between the quantum and classical worlds?
Using a single-photon source based on spontaneous parametric downconversion, and a forced-choice experimental design, there are now two possible experiments that could bring quantum weirdness into the realm of human perception: a test using superposition states, and what’s known as a ‘Bell test’ of non-locality using a human observer.


Superposition is a uniquely quantum concept. Quantum particles such as photons are described by the probability that a future measurement will find them in a particular location – so, before the measurement, the thinking is that they can be in two (or more!) places at once. This idea applies not just to particles’ locations, but to other properties, such as polarisation, which refers to the orientation of the plane along which they propagate in the form of waves. Measurement seems to make particles ‘collapse’ to one outcome or another, but no one knows exactly how or why the collapse happens.


The human visual system provides interesting new ways to investigate this problem. One simple but spooky test would be to determine whether humans perceive a difference between a photon in a superposition state and a photon in a definite location. Physicists have been interested in this question for years, and have proposed several approaches – but for the moment let’s consider the single-photon source described above, delivering a photon to either the left or the right of an observer’s eye.
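The proposed human-observer experiments have their own designs, but the arithmetic behind any "Bell test" is standard and worth seeing: quantum mechanics predicts correlations between polarization measurements that exceed the bound any local-realist account can reach.

```python
# Standard CHSH arithmetic behind a "Bell test", human observer or not:
# for polarization-entangled photon pairs, quantum mechanics predicts the
# correlation E(a, b) = cos(2*(a - b)) between analyzers at angles a and b.
# Local realism bounds |S| <= 2; the quantum prediction reaches 2*sqrt(2).
import math

def E(a_deg, b_deg):
    return math.cos(2 * math.radians(a_deg - b_deg))

a, a_prime = 0.0, 45.0        # one observer's two analyzer settings
b, b_prime = 22.5, 67.5       # the other observer's two settings

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(f"S = {S:.3f} (local realism requires |S| <= 2; "
      f"quantum maximum is 2*sqrt(2) = {2 * math.sqrt(2):.3f})")
```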

The signals indicating the approach of the autonomous car are increasingly strong - Tesla has created a customized chipset - but Alphabet’s (Google) efforts are approaching primetime.

Waymo is turning to Detroit to build its first self-driving car factory

Alphabet subsidiary Waymo, not to be outdone by its sister company Wing, announced today (April 23) that it had selected a facility in Detroit, Michigan, to house the company’s first factory dedicated to building autonomous vehicles. The company first hinted at working in Detroit back in January.


Waymo is working with the component company American Axle & Manufacturing to convert an existing factory in the traditional heart of the US’s car-making industry and have it up and running before the end of 2019.


Waymo will be leasing a building on American Axle’s campus and refitting it. The facility will primarily be used to install autonomous hardware and software in Chrysler Pacifica minivans and Jaguar I-Pace SUVs the company has purchased, Waymo told Quartz.


The move comes as fewer Americans are getting drivers’ licenses, more people are using ride-sharing services instead of owning cars, and startups are making cars that are beginning to drive themselves. Waymo’s choice suggests that even in a time of great upheaval in the US auto industry, tech companies interested in automotive technologies are still turning to those who historically have had the expertise.


Waymo has been expanding its commercial autonomous ride-hailing test service in the Phoenix, Arizona area, ahead of a wider launch.

The signals for autonomous cars are ever stronger - but those for the personal flying ‘chariot’ remain relatively weak. The images of the five winners are worth the view.

Meet the 5 Winners Of GoFly Phase II

Congratulations to all of our innovators and teams for completing Phase II of the GoFly challenge! We were incredibly impressed with the technical prowess and creativity of the entries—in fact, we received so many stellar high quality entries that we have added an extra prize, and will now be awarding 5 Phase II Teams with prizes. Each of these teams will receive $50,000 in prizes.


The GoFly community is comprised of more than 3,500 innovators from 101 countries across the globe. Of these innovators, 31 Phase II Teams across 16 countries submitted entries for review by a panel of experts across 2 rounds of rigorous judging. These Phase II teams were required to submit visual and written documentation detailing their personal flyer prototypes. It’s the first time physical prototypes were introduced into the challenge, and this crucial step has brought us ever closer to the Final Fly-Off.