Thursday, February 6, 2020

Friday Thinking 7 Feb 2020

Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition - and that tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning.
Work that engages our whole self becomes play that works.
Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  
In the 21st century - the planet is the little school house in the galaxy.
Citizenship is the battlefield of the 21st Century

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes:

Articles:



I think somebody talked about the law of unexpected consequences. It's something that nearly always happens. Unfortunately, the current scientific system is not designed to understand the big picture. The universities claim to teach the sciences to students. They don't. All they teach them is how to pass an exam. The universities should be dissolved. The very idea of teaching separate subjects in separate buildings, isn't that madness?

I learned most of it in the lab, not in college. Engineers, unlike scientists, need to see the big picture. The Wright brothers' plane was not created by scientists, but out on the beach by two practitioners. I'm sure that those craftsmen who built the cathedrals in Europe and in Britain were wondrous engineers. They had no computers. And they were craftsmen who followed their instincts and built the cathedrals that have lasted all these years.

James Lovelock's Recipes for Saving the Planet




If we believe that change must come from “the top”, then those of us not there will sit around waiting for it to happen. If we believe that the wealth of globalization will trickle down to everyone, then we will take what we get. If we believe that democracy is about swinging between the public controls of government on the left and the private forces of markets on the right, then we will not see the role that communities in the plural sector must play in buttressing the power of the other two sectors. With beliefs like these reframed, we can see our way to constructive action, ranging from creating social enterprises not tethered to the stock market all the way to establishing a Peace Council to renew global government.

Henry Mintzberg - Next step: What can we do now?




What Futures Thinking inspired in me, in the context of Personal Futures, is that instead of “I want to be this person,” we can reframe this desire as “this is the environment I want to be a part of,” and “I am a part of this future.”

Better selves for better futures




As we speak, there are something like 3,000 different immunotherapy trials underway.

The FDA has approved checkpoint inhibitor therapy for non-small cell lung cancer, small cell lung cancer, head and neck cancer, bladder cancer, Hodgkin lymphoma, some kinds of diffuse large B-cell lymphoma, esophageal cancer, and some colorectal cancers with defects in DNA damage repair. In fact, immunotherapy has been approved for any cancer type that has a defect in DNA damage repair called microsatellite instability.

The Contrarian Who Cures Cancers




Disruptive innovation describes a process by which a product or service powered by a technology enabler initially takes root in simple applications at the low end of a market — typically by being less expensive and more accessible — and then relentlessly moves upmarket, eventually displacing established competitors. Disruptive innovations are not breakthrough innovations or “ambitious upstarts” that dramatically alter how business is done but, rather, consist of products and services that are simple, accessible, and affordable. These products and services often appear modest at their outset but over time have the potential to transform an industry. Robert Merton talked about the idea of “obliteration by incorporation,” where a concept becomes so popularized that its origins are forgotten. I fear that has happened to the core idea of the theory of disruption, which is important to understand because it is a tool that people can use to predict behavior. That’s its value — not just to predict what your competitor will do but also to predict what your own company might do. It can help you avoid choosing the wrong strategy. 

Companies certainly know more about disruption than they did in 1995, but I still speak and write to executives who haven’t firmly grasped the implications of the theory. The forces that combine to cause disruption are like gravity — they are constant and are always at work within and around the firm. It takes very skilled and very astute leaders to be navigating disruption on a constant basis, and many managers are increasingly aware of how to do that.

And in my experience, it seems that it’s often easier for executives to spot disruptions occurring in someone else’s industry rather than their own, where their deep and nuanced knowledge can sometimes distract them from seeing the writing on the wall. That’s why theory is so important. The theory predicts what will happen without being clouded by personal opinion. I don’t have an opinion on whether a particular company is vulnerable to disruption or not — but the theory does. That’s why it’s such a powerful tool.

Disruption 2020: An Interview With Clayton M. Christensen




Here’s a signal of possibles - how we could re-imagine our social institutions based on concepts of democracy using 21st Century infrastructure.
Trusted choices made by voters who select knowledgeable delegates would serve to channel and reinforce real expertise into the decision-making process, resulting in better decisions overall. Put differently, we could work our way back to a political decision-making process based primarily on reality, expertise, and facts, as opposed to one based on special interests, ignorance, and corruption.

An Introduction to Liquid Democracy

Our democratic institutions are unable to meet today’s major challenges. We are divided into confused and incoherent Red (Republican) and Blue (Democratic) teams that fail to address the people’s real needs and desires. Rather than face the severe local, state, national, and global problems before us — from infrastructure decay to climate change to ever-mounting national debt and ever-growing income inequality — we get gridlock and kicking the can down the road.

One alternative to the politics of Team Red and Team Blue is known as “Liquid Democracy.” Also called “delegative democracy” or “proxy democracy,” Liquid Democracy combines elements of both direct and representative democracy. It enables people to vote directly, or to assign their vote to individuals or organizations they trust. Liquid Democracy is designed to channel and leverage the collective expertise we need, to resist the corruption of money in politics, and to increase a felt sense of political participation with greater buy-in on decisions.

This essay explores the way it works, its likely benefits, and challenges to its implementation.
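
As a rough illustration of the delegation mechanics described above, here is a minimal sketch in Python (my own, not taken from the essay or from any particular Liquid Democracy platform): each participant either votes directly or names someone they trust, delegation chains are followed until they reach a direct vote, and chains that loop or end at someone who never voted simply lapse. All names and the data layout are hypothetical.

def tally(direct_votes, delegations):
    """Count votes under a simple liquid-democracy rule.

    direct_votes: dict mapping a voter to the option they chose directly.
    delegations:  dict mapping a voter to the person they trust to vote for them.
    A voter appears in at most one of the two dicts.
    """
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until it reaches someone who voted directly.
        while current in delegations and current not in seen:
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:            # chain ended at a real vote
            choice = direct_votes[current]
            counts[choice] = counts.get(choice, 0) + 1
        # else: a cycle, or a delegate who never voted -> the vote is not counted
    return counts

# Example: Dana votes directly; Alice trusts Dana, Bob trusts Alice, Carol trusts Frank.
print(tally({"dana": "option A", "erin": "option B"},
            {"alice": "dana", "bob": "alice", "carol": "frank"}))
# option A gets 3 votes, option B gets 1; Carol's delegate never voted, so her vote lapses.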


Another signal of the emerging transparencies of the 21st Century digital environment. As David Brin noted in “The Transparent Society”, we have a choice - a surveilled society where we don’t know who’s watching - or a radical transparency. Who better to watch the watchers than the watched? We were dragged kicking and screaming into the industrial constraint of ubiquitous anonymity, and now we want the right to be forgotten.
People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.

Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program in advance of a new statewide law, which declared it illegal, coming into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we're in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it's being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let's take them in turn.


This is an amazing signal for anyone who loves music - the next several decades may awaken digital generations - whether they are natives or integrated immigrants. 
MIDI’s low level of resolution made it better suited to modeling Western music and music played on instruments with discrete tones, like keyboards. Music that relies on notes outside the standard Western scale, and music played on string instruments, is not as well represented. Neely says it is particularly difficult to capture the sounds of Indian and Turkish music. For sophisticated MIDI users, these issues could be addressed, but they were challenging, and not all artists have the time or desire to get into the technical minutiae of programming MIDI.
These may now be issues of the past. In early January 2020, the MIDI Manufacturers Association, the nonprofit organization that manages MIDI, announced the release of MIDI 2.0. The new protocol involved years of work from the organization’s volunteers, and getting companies like Google, Apple, Microsoft, and all of the major music manufacturers on board.
The amount of choice MIDI 2.0 allows may now seem superfluous, but he thinks it’s hard to know what music will sound like 50 years from now. The recent MIDI update could allow people to build musical worlds we can’t yet imagine.

An update to a 37-year-old digital protocol could profoundly change the way music sounds

A lot of big things happened in music in 1983. It was the year Michael Jackson’s album Thriller hit number one across the world, compact discs were first released in the US, and the Red Hot Chili Peppers formed. Yet there was one obscure event that was more influential than all of them: MIDI 1.0 was released. MIDI stands for “Musical Instrument Digital Interface” and, after 37 years, it has finally received a major update. MIDI 2.0 is live, and it could mean the end of the keyboard’s dominance over popular music.

Whether you know it or not, MIDI has changed your music listening life. MIDI is the protocol by which digitized information is converted into audio. When a musician plays into a MIDI-enabled device, like a synthesizer or drum machine, MIDI is used to digitize the different elements of the music, like the note and the power with which it was played (a softly plucked C, for example, or a full-on fortissimo F-sharp). This allows music producers and technicians to adjust aspects of the music later on. For example, they might choose to change the pitch of certain notes or even switch the sound from a keyboard to a trumpet or guitar. Basically, it is what musicians use to program music. Ikutaro Kakehashi and Dave Smith, the leaders in creating MIDI in the early 1980s, rightfully won a Technical Grammy for their work in 2013.

Though MIDI has done an exceptional job of digitizing music for the last 37 years, it hasn’t been perfect. MIDI quantizes music, meaning it forces musical components into particular values. In MIDI 1.0, all data was in 7-bit values. That means musical qualities were quantized on a scale of 0 to 127. Features like volume, pitch, and how much of the sound should come out of the right or left speaker are all measured on this scale, with 128 possible points. This is not a lot of resolution. Some really sophisticated listeners can clearly hear the steps between points.
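
To make the resolution point concrete, here is a minimal sketch in Python (my own illustration, not from the article); it compares the 7-bit scale of MIDI 1.0 with the 16-bit resolution MIDI 2.0 uses for values such as note velocity (many controllers go to 32 bits). The function names are purely illustrative.

def quantize(value, bits):
    """Map a normalized value in [0.0, 1.0] onto an integer scale with 2**bits steps."""
    levels = 2 ** bits - 1            # 127 for 7-bit MIDI 1.0, 65535 for 16-bit MIDI 2.0
    return round(value * levels)

def step_size(bits):
    """Smallest change in the normalized value the protocol can represent."""
    return 1.0 / (2 ** bits - 1)

# A gentle swell from 50% to 51% volume:
print(quantize(0.50, 7), quantize(0.51, 7))     # 64 65   -> barely one step in MIDI 1.0
print(quantize(0.50, 16), quantize(0.51, 16))   # 32768 33423 -> hundreds of steps in MIDI 2.0
print(step_size(7))    # ~0.0079: the audible "stairsteps" sensitive listeners notice
print(step_size(16))   # ~0.000015: effectively continuous

The point is simply the size of the steps: at 7 bits the quantization grid is coarse enough to hear, while the much finer grids in MIDI 2.0 make it effectively disappear.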


This is a very important signal of the future of work and public infrastructure necessary to enable such work - including a universal basic income, free life-long education, and public internet access.
He predicts humans and machines will keep working side by side, and at least for the foreseeable future, citizen scientists will still be needed to help train machine-learning algorithms. But he also envisions these volunteers making other important contributions. For instance, he argues that when looking through seemingly endless piles of images or historical records or even graphs of data, these amateurs are in the best position to notice something rare or unusual; experts tend to be too focused on the task at hand, and computers might not be trained to identify something out of the ordinary.

An astrophysicist honors citizen scientists in the age of big data

The Crowd and the Cosmos examines the role of amateurs in science
Astrophysicist Chris Lintott had a problem back in the mid-2000s. He wanted to know if the chemistry of star formation varies in different types of galaxies. But first he needed to sort through images of hundreds of thousands of galaxies to gather an appropriate sample to study. The task would take many months if not longer for one person, and computers at the time weren’t up to the challenge. So Lintott and colleagues turned to the public for help.

The group launched Galaxy Zoo in 2007. The website asked volunteers to classify galaxies by shape — spiral or elliptical. Interest in the project was overwhelming. On the first day, so many people logged on that the server hosting the images crashed. Once the technical difficulties were resolved, more than 70,000 image classifications soon came in every hour. And as Lintott would learn, amateurs were just as good as professionals at categorizing galaxies.

Galaxy Zoo’s success helped awaken other scientists to the potential of recruiting citizen scientists online to sift through large volumes of all sorts of data. That led to the birth of the Zooniverse, an online platform that lets anyone participate in real science. Projects on the platform ask volunteers to do everything from digitizing handwritten records from research ships to identifying animals caught on camera to sorting through telescope data to find signs of exoplanets.
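
As a toy sketch of how such volunteer classifications can be combined (my own illustration in Python, not Galaxy Zoo’s or the Zooniverse’s actual pipeline): each image is shown to many volunteers, the majority label is kept when agreement is high, and images the crowd disagrees about are flagged - which is often exactly where the rare or unusual objects the author describes turn up.

from collections import Counter

def aggregate(classifications, agreement=0.8):
    """classifications: dict mapping galaxy id -> list of volunteer labels.
    Returns (consensus labels, ids needing expert follow-up)."""
    consensus, flagged = {}, []
    for galaxy, labels in classifications.items():
        label, votes = Counter(labels).most_common(1)[0]
        if votes / len(labels) >= agreement:
            consensus[galaxy] = label
        else:
            flagged.append(galaxy)     # disagreement often marks the rare, odd objects
    return consensus, flagged

votes = {"G1": ["spiral"] * 9 + ["elliptical"],
         "G2": ["elliptical"] * 6 + ["spiral"] * 4}
print(aggregate(votes))   # ({'G1': 'spiral'}, ['G2'])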


This is a good comment on the current state of climate modeling in science. Not too long and worth the read.

Emissions – the ‘business as usual’ story is misleading

Stop using the worst-case scenario for climate warming as the most likely outcome — more-realistic baselines make for better policy.
More than a decade ago, climate scientists and energy modellers made a choice about how to describe the effects of emissions on Earth’s future climate. That choice has had unintended consequences which today are hotly debated. With the Sixth Assessment Report (AR6) from the Intergovernmental Panel on Climate Change (IPCC) moving into its final stages in 2020, there is now a rare opportunity to reboot.

In the lead-up to the 2014 IPCC Fifth Assessment Report (AR5), researchers developed four scenarios for what might happen to greenhouse-gas emissions and climate warming by 2100. They gave these scenarios a catchy title: Representative Concentration Pathways (RCPs). One describes a world in which global warming is kept well below 2 °C relative to pre-industrial temperatures (as nations later pledged to do under the Paris climate agreement in 2015); it is called RCP2.6. Another paints a dystopian future that is fossil-fuel intensive and excludes any climate mitigation policies, leading to nearly 5 °C of warming by the end of the century. That one is named RCP8.5.

RCP8.5 was intended to explore an unlikely high-risk future. But it has been widely used by some experts, policymakers and the media as something else entirely: as a likely ‘business as usual’ outcome. A sizeable portion of the literature on climate impacts refers to RCP8.5 as business as usual, implying that it is probable in the absence of stringent climate mitigation. The media then often amplifies this message, sometimes without communicating the nuances. This results in further confusion regarding probable emissions outcomes, because many climate researchers are not familiar with the details of these scenarios in the energy-modelling literature.

This is particularly problematic when the worst-case scenario is contrasted with the most optimistic one, especially in high-profile scholarly work. This includes studies by the IPCC, such as AR5 and last year’s special report on the impact of climate change on the ocean and cryosphere. The focus becomes the extremes, rather than the multitude of more likely pathways in between.


This is another signal suggesting a transformation of our business models. Instead of banning products like plastic (which won’t change the essential business model), we should ban landfill, waterfill, and airfill. The consequence is that no products can end up as waste. All products would have to be designed for complete recovery of components at the end of their life - like all flourishing ecologies.

Vast amounts of valuable energy, nutrients, water lost in world's fast-rising wastewater streams

Vast amounts of valuable energy, agricultural nutrients, and water could potentially be recovered from the world's fast-rising volume of municipal wastewater, according to a new study by UN University's Canadian-based Institute for Water, Environment and Health (UNU-INWEH).

Today, some 380 billion cubic meters (1 m3 = 1,000 litres) of wastewater are produced annually worldwide—5 times the amount of water passing over Niagara Falls annually—enough to fill Africa's Lake Victoria in roughly seven years, Lake Ontario in four, and Lake Geneva in less than three months.
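
A quick back-of-the-envelope check of those comparisons, sketched in Python with commonly cited approximate volumes (my own rough figures, not taken from the study):

wastewater_km3_per_year = 380                            # 380 billion cubic metres = 380 km^3
niagara_km3_per_year = 2400 * 3600 * 24 * 365 / 1e9      # ~75.7 km^3 at ~2,400 m^3/s average flow
print(wastewater_km3_per_year / niagara_km3_per_year)    # ~5.0 times Niagara's annual flow

# Approximate lake volumes: Victoria ~2,760 km^3, Ontario ~1,640 km^3, Geneva ~89 km^3
for lake, volume_km3 in [("Victoria", 2760), ("Ontario", 1640), ("Geneva", 89)]:
    print(lake, round(volume_km3 / wastewater_km3_per_year, 1), "years to fill")
    # -> Victoria 7.3, Ontario 4.3, Geneva 0.2 (under three months)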

Furthermore, the paper says, wastewater volumes are increasing quickly, with a projected rise of roughly 24% by 2030 and 51% by 2050.

Among major nutrients, 16.6 million metric tonnes of nitrogen are embedded in wastewater produced worldwide annually, together with 3 million metric tonnes of phosphorus and 6.3 million metric tonnes of potassium. Theoretically, full recovery of these nutrients from wastewater could offset 13.4% of global agricultural demand for them.

Beyond the economic gains of recovering these nutrients are critical environmental benefits such as minimizing eutrophication—the phenomenon of excess nutrients in a body of water causing dense plant growth and aquatic animal deaths due to lack of oxygen.

The energy embedded in wastewater, meanwhile, could provide electricity to 158 million households—roughly the number of households in the USA and Mexico combined.


A small but important signal of the change in global energy geopolitics.

Scotland Is on Track to Hit 100 Percent Renewable Energy This Year

Scotland is on track to move its energy sector to 100 percent renewables by the end of this year. That’s just in time to host the United Nations’ international climate talks in November. At least someone’s doing something right.

Environmental organization Scottish Renewables put together a report tracking the country’s renewable progress. It shows that, based on 2018 data, renewables provided 76 percent of Scotland’s electricity consumption, and the percentage is expected to keep rising and reach 100 percent soon. That’s because, unlike many countries, Scotland is actually moving away from fossil fuels rapidly. Scots have completely kicked coal, shutting down the nation’s last coal-fired power plant in 2016. And it only has one working fossil fuel-based energy source left, a gas-fired plant in Aberdeenshire (though two more gas plants are slated to be built).


A good signal of progress in domesticating DNA.

CRISPR gene-editing corrects muscular dystrophy in pigs

Duchenne muscular dystrophy (DMD) is one of the most common and most devastating muscular diseases, greatly reducing patients’ quality of life and life expectancy. Now, researchers in Germany have managed to use the CRISPR gene-editing tool to correct the condition in pigs, bringing the treatment ever closer to human trials.

A protein called dystrophin is necessary for muscles to regenerate themselves, but people with DMD have a genetic mutation that disrupts the gene that produces dystrophin. That means that affected children usually begin to show symptoms of muscle weakness by age five, lose the ability to walk by about age 12, and rarely live through their 30s as their heart muscles give out.

Because it’s a genetic condition, DMD is a prime target for treatment with the gene-editing tool CRISPR. This system is prized for its ability to cut out problematic genes and replace them with more beneficial ones, and has been put to work treating cancer, HIV and forms of blindness.

In experiments in pigs, the researchers on the new study used CRISPR to correct the faulty dystrophin gene. That allowed the pigs to once again produce dystrophin proteins – although they were shorter than usual, they were still stable and functional. That improved the animals’ muscle function and life expectancy, and made them less likely to develop an irregular heartbeat.


A great signal for helping humans to manage the microbial and viral environments.

Unique new antiviral treatment made using sugar

New antiviral materials made from sugar have been developed to destroy viruses on contact and may help in the fight against viral outbreaks.

This new development from a collaborative team of international scientists shows promise for the treatment of herpes simplex (cold sore virus), respiratory syncytial virus, hepatitis C, HIV, and Zika virus, to name a few. The team has demonstrated success treating a range of viruses in the lab, from respiratory infections to genital herpes.

The research is a result of a collaboration between scientists from The University of Manchester, the University of Geneva (UNIGE) and the EPFL in Lausanne, Switzerland. Although at a very early stage of development, the broad spectrum activity of this new approach could also be effective against newly prevalent viral diseases such as the recent coronavirus outbreak.

Publishing their work in the journal Science Advances, the team showed that they successfully engineered new modified molecules using natural glucose derivatives, known as cyclodextrins. The molecules attract viruses before breaking them down on contact, destroying the virus and fighting the infection.


A weak signal of the potential of stem cells to address damage arising in our joints. This is a signal many hope will bring results.

Microrobot system regenerates knee cartilage in rabbits

A team of researchers affiliated with multiple institutions in China and one in Korea has developed a micro-robot system that regenerated knee cartilage in rabbits. In their paper published in the journal Science Advances, the group describes their system and how well it worked.

Prior research has shown that mesenchymal stem cells found in bone marrow and fat can be coaxed into growing into cartilage cells. And researchers have also found that stem cells can be used to repair damaged cartilage. The challenge is placing the cells in the body where they are needed and keeping them in place until they attach to the surrounding tissue. In this new effort, the researchers have created a system that was able to overcome these hurdles—at least in rabbits.

The researchers created tiny hollow, porous balls out of a polymer called PLGA. The balls were then covered with a mixture of ferumoxytol (an iron mixture) and chitosan (a type of sugar). The next step involved filling the balls with cultured mesenchymal stem cells. The balls were injected into the knees of test rabbits with damaged knee cartilage (the researchers cut notches into it), and the rabbits were fitted with magnets to keep the balls in place.

After three weeks, the researchers found that those rabbits with the treated knees showed signs of cartilage rejuvenation—and the hollow balls were degrading as expected. 


This is a good signal of the emerging environment of sensors, AI and automation.

Covariant Uses Simple Robot and Gigantic Neural Net to Automate Warehouse Picking

A massive neural network connects cameras, a robot arm, and a suction gripper in Covariant’s logistics system
There’s already a huge amount of automation in logistics, but as Abbeel explains, in warehouses there are two separate categories that need automation: “The things that people do with their legs and the things that people do with their hands.” The leg automation has largely been taken care of over the last five or 10 years through a mixture of conveyor systems, mobile retrieval systems, Kiva-like mobile shelving, and other mobile robots. “The pressure now is on the hand part,” Abbeel says. “It’s about how to be more efficient with things that are done in warehouses with human hands.”

A huge chunk of human-hand tasks in warehouses comes down to picking. That is, taking products out of one box and putting them into another box. In the logistics industry, the boxes are usually called totes, and each individual kind of product is referred to by its stock keeping unit number, or SKU. Big warehouses can have anywhere from thousands to millions of SKUs, which poses an enormous challenge to automated systems. As a result, most existing automated picking systems in warehouses are fairly limited. Either they’re specifically designed to pick a particular class of things, or they have to be trained to recognize more or less every individual thing you want them to pick. Obviously, in warehouses with millions of different SKUs, traditional methods of recognizing or modeling specific objects is not only impractical in the short term, but would also be virtually impossible to scale.

This is why humans are still used in picking—we have the ability to generalize. We can look at an object and understand how to pick it up because we have a lifetime of experience with object recognition and manipulation. We’re incredibly good at it, and robots aren’t. “From the very beginning, our vision was to ultimately work on very general robotic manipulation tasks,” says Abbeel. “The way automation’s going to expand is going to be robots that are capable of seeing what’s around them, adapting to what’s around them, and learning things on the fly.”

Covariant is tackling this with relatively simple hardware, including an off-the-shelf industrial arm (which can be just about any arm), a suction gripper (more on that later), and a straightforward 2D camera system that doesn’t rely on lasers or pattern projection or anything like that. What couples the vision system to the suction gripper is one single (and very, very large) neural network, which is what helps Covariant to be cost effective for customers. “We can’t have specialized networks,” says Abbeel. “It has to be a single network able to handle any kind of SKU, any kind of picking station. In terms of being able to understand what’s happening and what’s the right thing to do, that’s all unified. We call it Covariant Brain, and it’s obviously not a human brain, but it’s the same notion that a single neural network can do it all.”