Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.
Many thanks to those who enjoy this. ☺
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - Work is just beginning. Work that engages our whole self becomes play that works. Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How
“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
Content
Quotes:
Articles:
People in companies are often stuck in narrow, repetitive patterns of conversation that provide them with numbing, repressive and even neurotic experiences. We should look at communication, not competences, as the most predictive group activity there is in forecasting viability and agility. The opportunity provided by interaction technologies lies in the widening and deepening of communication leading to new learning, to new voices taking part and to new enriching conversations that can cross siloed organizational units and stale process charts. A key management challenge today is to understand that the only way to guarantee agility and resilience is to actively and widely participate in the conversations that matter in an enriching way.
There are two distinctly different approaches to understanding the individual and the social. Mainstream thinking sees the social as a community, on a different level from the individuals who form it. The social is separate from the individuals. “I” and “we” are separate things and can be understood separately.
A totally different approach sees individuals themselves as thoroughly social.
In this way of thinking, we leave behind the western notion of the self-governing, independent individual for a different notion, of interdependent people whose identities are established in interaction with each other. From this perspective, individual change cannot be separated from changes in the groups to which an individual belongs. And changes in the groups don’t take place without the individuals changing. We form our groups and our followerships and they form us at the same time, all the time.
Identity is a pattern in time.
I am not I
So what has happened? The light begins to dawn when you look at the nutrition figures in more detail. Yes, we ate more in 1976, but differently. Today, we buy half as much fresh milk per person, but five times more yoghurt, three times more ice cream and – wait for it – 39 times as many dairy desserts. We buy half as many eggs as in 1976, but a third more breakfast cereals and twice the cereal snacks; half the total potatoes, but three times the crisps. While our direct purchases of sugar have sharply declined, the sugar we consume in drinks and confectionery is likely to have rocketed (there are purchase numbers only from 1992, at which point they were rising rapidly. Perhaps, as we consumed just 9 kcal a day in the form of drinks in 1976, no one thought the numbers were worth collecting.) In other words, the opportunities to load our food with sugar have boomed. As some experts have long proposed, this seems to be the issue.
The shift has not happened by accident. As Jacques Peretti argued in his film The Men Who Made Us Fat, food companies have invested heavily in designing products that use sugar to bypass our natural appetite control mechanisms, and in packaging and promoting these products to break down what remains of our defences, including through the use of subliminal scents. They employ an army of food scientists and psychologists to trick us into eating more than we need, while their advertisers use the latest findings in neuroscience to overcome our resistance.
We’re in a new age of obesity. How did it happen? You’d be surprised
In the Industrial Age, scalable efficiency drove value creation. The bargain of the Industrial Age was that, if consumers wanted affordable products and services, we would have to settle for standardized products and services – one size fits all. You can have any color as long as it’s black. It’s a bargain that drove the growth of mass consumer societies in developed and developing economies.
And it shaped the scalable efficiency model that drives virtually all our institutions today – the key to success is to become more and more efficient at scale. Efficiency requires tight specification and standardization of tasks throughout the institution and tightly integrating those tasks into end to end business processes. It is very much supply driven, since the demand was willing to settle for standardized products. The winners would be those who could produce the most cost-efficient products at scale. In a world of standardized products and tasks, context was largely irrelevant, a distraction. Focus on the standardized product and process.
The world is changing
The forces shaping the Big Shift are progressively undermining standardization and efficiency (as conventionally defined) as drivers of value creation. As consumers, we’re gaining more and more power and we’re less and less willing to settle for standardized products and services – we want offerings that are tailored to our unique and evolving needs. On the supply side, digital technology is making it easier and far more affordable to produce highly personalized products and services. That’s leading to more and more fragmentation in product and services businesses.
What institutions will have the greatest impact in the future? It will be those who shift their focus and learn how to harness that data to generate greater insight into expanding levels of context and to see new opportunities to add value in those contexts.
Navigating From the Industrial Age to the Contextual Age
Our attention is a result of the filters we use and our actions are results of our sense making. The filters can be a mix of habits, access to media and usage of intelligent tools. Increasingly, and most importantly, these filters are social. They are the people in our network who we recognize. Our most valuable guides to useful bits of insight are trusted people, people whose activities we can follow to help us advance and make sense.
There can hardly be a follower without a leader. A lot of management research has focused on the leadership attributes of an individual and the hierarchical organization. Leading and following in the traditional sense have seen the leader making people follow him or her through motivation and rewards. The leader also decided who the followers should be.
Leading and following when seen as a two-sided, more symmetric, relationship, not as attributes of individuals, follow a very different dynamic. Leading in this new meaning is not a generic capability or position, but highly contextual.
Leading, then, is not position-based, but recognition-based.
A relational view to leadership
The underlying challenge of Climate Change is a crisis of consciousness - the realization that humans are one species inhabiting one planet. From that single photograph of Earth taken by one of the first astronauts to this - a 6-minute video of the next capacity to see the hyper-object we call Earth. The transparent, queryable Earth.
The mission to create a searchable database of Earth's surface | Will Marshall
What if you could search the surface of the Earth the same way you search the internet? Will Marshall and his team at Planet use the world's largest fleet of satellites to image the entire Earth every day. Now they're moving on to a new project: using AI to index all the objects on the planet over time -- which could make ships, trees, houses and everything else on Earth searchable, the same way you search Google. He shares a vision for how this database can become a living record of the immense physical changes happening across the globe. "You can't fix what you can't see," Marshall says. "We want to give people the tools to see change and take action."
While this is ‘only’ the mouse brain - it represents another strong signal of the increasingly comprehensive maps we are making of ourselves.
It’s an “amazing paper cataloguing the diversity and distribution of synapse sub-types across the entire mouse brain,” wrote neurogeneticist Dr. Kevin Mitchell. It “highlights [the] fact that synapses are the key computational elements in the nervous system.”
Amazing New Brain Map of Every Synapse Points to the Roots of Thinking
Imagine a map of every single star in an entire galaxy. A map so detailed that it lays out what each star looks like, what they’re made of, and how each star is connected to another through the grand physical laws of the cosmos.
While we don’t yet have such an astronomical map of the heavens, thanks to a momentous study published last week in Neuron, there is now one for the brain.
Using genetically modified mice, the team literally made each synapse light up under fluorescent light throughout the brain like the starry night. And similar to the way stars differ, the team found that synapses vastly varied, but in striking patterns that may support memory and thinking.
The detailed maps revealed a fundamental law of brain activity. With the help of machine learning, the team categorized roughly one billion synapses across the brain into 37 sub-types. Here’s the kicker: when sets of neurons receive electrical information, such as trying to decide between different solutions for a problem, unique sub-types of synapses spread out among different neurons unanimously spark with activity.
In other words: synapses come in types. And each type may control a thought, a decision, or a memory.
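The categorization step described above is, at heart, unsupervised clustering: each synapse becomes a feature vector (protein levels, size, shape) and an algorithm groups similar vectors into sub-types. A minimal sketch of the idea on made-up one-dimensional data - the study's actual features, pipeline, and the number 37 come from far richer measurements:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Toy 1-D k-means: group scalar 'synapse features' into k sub-types."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Made-up "synapse feature" values drawn around three underlying sub-types.
data = [0.10, 0.12, 0.09, 0.50, 0.52, 0.48, 0.90, 0.88, 0.93]
centers = kmeans(data, k=3)  # one center per recovered sub-type
```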
In a world that is now overwhelmingly urban and where many parts of the developed world have more people over 65 than under 15 - we have to re-imagine how we create conditions of community. This is one vision for the ‘new elders’ - but it could just as well be a multi-generational approach.
Co-housing is growing in popularity with young and old around the world. There are 165 co-housing communities in the US and another 140 in the planning stages. In the UK, Older Women’s Co-Housing (OWCH) in north London has a long waiting list and there are 20 other established co-housing communities, and 40 in development.
Happy together: lonely baby boomers turn to co-housing
For older people, co-housing offers a sense of community without losing independence
… among the artist studios and loft apartments on the colourful streets of Jingletown, Oakland, Mark found an answer: the Phoenix Commons, a co-housing community for over-55s. Residents own their modest homes while sharing spacious communal areas, including a kitchen, movie room and a hot tub. The community is self-managed, residents work together in committees and each night a group of volunteers cooks for everyone.
The four-storey complex of 41 units is designed to foster a sense of community. Apartment windows face each other and walkways create a visible sense of life and movement.
This is a significant signal for what will likely become ubiquitous - the use of facial recognition for security and identity verification - but also, soon, to gauge the emotional pulse of a workforce or a citizenry, and soon after as an aid to health maintenance.
Tech Mahindra adopts facial recognition to mark attendance
Facial recognition could soon jump from your smartphone to your workplace with employers using it to mark attendance and gauge the mood of the workforce.
Tech Mahindra, the fifth largest Indian IT services company by revenue, has launched a facial-recognition system for employees at its Noida office. Employees can now mark their attendance using swipe cards or they can simply look into the new facial-recognition terminal and sign in.
Harshvendra Soin, chief people officer at Tech Mahindra, said the tool would help the organisation measure the mood of employees when they use the system through a functionality called “moodometer”.
The system can recognise nine different facial expressions. It captures each employee’s expression every time he or she uses the system and consolidates the information from all employees daily to create a moodometer score, which reflects the mood of the workforce.
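The article gives no detail on how the moodometer score is computed. One plausible, purely illustrative aggregation is to assign each recognised expression a valence and average over the day's sign-ins; the nine labels and weights below are invented for the sketch, not Tech Mahindra's actual categories:

```python
# Hypothetical expression labels and valence weights; the article does not
# disclose Tech Mahindra's nine categories or its scoring formula.
VALENCE = {"happy": 1.0, "surprised": 0.5, "neutral": 0.0, "confused": -0.2,
           "sad": -0.5, "fearful": -0.6, "contemptuous": -0.7,
           "disgusted": -0.8, "angry": -1.0}

def moodometer_score(daily_expressions):
    """Average the valence of every expression captured during one day."""
    if not daily_expressions:
        return 0.0
    return sum(VALENCE[e] for e in daily_expressions) / len(daily_expressions)

readings = ["happy", "neutral", "sad", "happy"]  # one label per sign-in
score = moodometer_score(readings)  # (1.0 + 0.0 - 0.5 + 1.0) / 4 = 0.375
```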
Here is another signal of the emerging integration of AI and surveillance and perhaps - if we pay attention - of creating conditions for Reciprocal Accountability (who better to watch the watchers than the watched). This could be an excellent use of distributed ledger technologies.
“While certain information may need to stay secret for an investigation to be done properly, some details have to be revealed for accountability to even be possible,” says graduate student Jonathan Frankle, one of the researchers on the team, in a statement. “This work is about using modern cryptography to develop creative ways to balance these conflicting issues.”
MIT’s tool for tracking police surveillance: a cryptographic ledger
Scientists at the Massachusetts Institute of Technology have proposed a cryptographically powered system they say could help the public track court orders that let law enforcement access people’s digital data without disclosing too much information.
The system, dubbed Accountability of Unreleased Data for Improved Transparency, or AUDIT, would create a digital ledger of data requests where prosecutors would agree to make their requests public at a later date, assuming court approval. Right now, many court orders approving access to cloud data are designed to be only temporarily sealed, but prosecutors and judges often don’t go back to unseal them once cases are resolved, the scientists say. Digitally committing to unseal the requests would let members of the public track whether the documents are, in fact, later made public.
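AUDIT's actual design uses more sophisticated cryptography than shown here, but the core "commit now, reveal later" idea can be sketched with a plain hash commitment posted to an append-only ledger. Everything below is an illustrative simplification, not the researchers' protocol:

```python
import hashlib

ledger = []  # stand-in for an append-only public ledger

def commit(request_details: str) -> int:
    """Publish only a hash of a sealed data request; contents stay secret."""
    digest = hashlib.sha256(request_details.encode()).hexdigest()
    ledger.append({"commitment": digest, "revealed": None})
    return len(ledger) - 1  # public index the commitment can be checked against

def reveal(index: int, request_details: str) -> bool:
    """Unseal the request later; anyone can verify it matches the commitment."""
    entry = ledger[index]
    matches = hashlib.sha256(request_details.encode()).hexdigest() == entry["commitment"]
    if matches:
        entry["revealed"] = request_details
    return matches

i = commit("warrant #1234: cloud email records")        # filed under seal
assert reveal(i, "warrant #1234: cloud email records")  # honest unsealing
assert not reveal(i, "warrant #9999: something else")   # substitution detected
```

Because the hash is public from the start, the public can count how many sealed requests exist and notice when promised unsealings never happen - which is exactly the accountability gap the researchers describe.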
The augmenting of human capacity with AI continues to advance - although not ready for primetime.
The work is the result of a multiyear collaboration between the three institutions. And while the software is not ready for clinical use, it could be deployed in hospitals in a matter of years.
DeepMind’s AI can detect over 50 eye diseases as accurately as a doctor
The system analyzes 3D scans of the retina and could help speed up diagnoses in hospitals
Step by step, condition by condition, AI systems are slowly learning to diagnose disease as well as any human doctor, and they could soon be working in a hospital near you. The latest example is from London, where researchers from Google’s DeepMind subsidiary, UCL, and Moorfields Eye Hospital have used deep learning to create software that identifies dozens of common eye diseases from 3D scans and then recommends the patient for treatment.
Those involved in the research described it as “ground-breaking.” Mustafa Suleyman, head of DeepMind Health, said in a press statement that the project was “incredibly exciting” and could, in time, “transform the diagnosis, treatment, and management of patients with sight threatening eye conditions [...] around the world.”
The software, described in a paper published in the journal Nature Medicine, is based on established principles of deep learning, which uses algorithms to identify common patterns in data. In this case, the data is 3D scans of patients’ eyes made using a technique known as optical coherence tomography, or OCT. Creating these scans takes around 10 minutes and involves bouncing near-infrared light off of the interior surfaces of the eye. Doing so creates a 3D image of the tissue, which is a common way to assess eye health. OCT scans are a crucial medical tool, as early identification of eye disease often saves the patient’s sight.
This is a fascinating signal related to the emergence of new scientific methods based on massive data and AI.
This leaves open the possibility that there are exotic particles that produce signatures no one has thought of — something that general searches have a better chance of finding.
LHC physicists embrace brute-force approach to particle hunt
The world’s most powerful particle collider has yet to turn up new physics — now some physicists are turning to a different strategy.
A once-controversial approach to particle physics has entered the mainstream at the Large Hadron Collider (LHC). The LHC’s major ATLAS experiment has officially thrown its weight behind the method — an alternative way to hunt through the reams of data created by the machine — as the collider’s best hope for detecting behaviour that goes beyond the standard model of particle physics. Conventional techniques have so far come up empty-handed.
So far, almost all studies at the LHC — at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland — have involved ‘targeted searches’ for signatures of favoured theories. The ATLAS collaboration now describes its first all-out ‘general’ search of the detector’s data, in a preprint posted on the arXiv server last month and submitted to the European Physical Journal C. Another major LHC experiment, CMS, is working on a similar project.
“My goal is to try to come up with a really new way to look for new physics” — one driven by the data rather than by theory, says Sascha Caron of Radboud University Nijmegen in the Netherlands, who has led the push for the approach at ATLAS. General searches are to the targeted ones what spell checking an entire text is to searching that text for a particular word. These broad searches could realize their full potential in the near future, when combined with increasingly sophisticated artificial-intelligence (AI) methods.
LHC researchers hope that the methods will lead them to their next big discovery — something that hasn’t happened since the detection of the Higgs boson in 2012, which put in place the final piece of the standard model.
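A general search, in the spell-checker sense above, amounts to scanning many event signatures at once and flagging any whose observed count deviates strongly from the standard-model expectation. A toy sketch with invented region names and numbers - real analyses must also correct for the look-elsewhere effect of testing many regions at once:

```python
import math

def poisson_pvalue(observed: int, expected: float) -> float:
    """P(X >= observed) for X ~ Poisson(expected): the excess p-value."""
    # Sum the lower tail in log space to avoid overflow, then subtract from 1.
    lower = sum(math.exp(-expected + k * math.log(expected) - math.lgamma(k + 1))
                for k in range(observed))
    return max(0.0, 1.0 - lower)

# Invented "signature regions": (observed events, standard-model expectation).
regions = {"2 leptons + jets": (105, 100.0),
           "4 jets, high pT":  (180, 120.0),
           "1 photon + MET":   (52,  50.0)}

p_values = {name: poisson_pvalue(obs, exp)
            for name, (obs, exp) in regions.items()}
flagged = [name for name, p in p_values.items() if p < 1e-3]
# Only the region with a genuine excess over expectation gets flagged.
```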
This is an interesting signal beyond the issue of space - but of mobility and capability anywhere.
One Small Step Toward Printing Replacement Organs...in Space
Allevi and Made in Space want to send a bioprinter to the International Space Station
A new partnership between the bioprinter company Allevi and Made in Space, a company with two 3D printers aboard the International Space Station (ISS), represents the first step toward those scenes of sci-fi medicine.
Allevi has designed the ZeroG bio extruder, which is capable of printing biomaterials in microgravity, and the two companies have worked together to ensure that it slots easily into a Made in Space printer. “We want it to be astronaut plug-and-play,” says Ricky Solorzano, CEO of Allevi (the company formerly known as BioBots).
The ZeroG is just a proof of concept, but both companies seem confident that they can get the extruder up to the ISS in the near future. “We have a lot of experience with biomaterials, and they know how to get things to space,” says Solorzano.
This is very interesting - marshaling biology into the Internet-of-Things and a sensor-network.
MIT scientists got spinach to send them an email
Not only is it a superfood, as Popeye cartoons have clearly demonstrated, but it can be a superhero, when imbued with sensors to detect explosives.
MIT engineers recently accomplished this, allowing a spinach plant to send an email to a smartphone when it encountered nitroaromatics—chemical compounds found in many types of explosives. The plant’s roots picked up this compound, which was sent to the leaves, where nanotube sensors triggered a signal.
Two sets of carbon nanotubes were used. MIT scientists employed a technique known as vascular infusion to place them into the underside of the plant’s leaves, within a layer known as the mesophyll—where photosynthesis takes place. One set of nanotubes sent out a constant fluorescent signal, giving the computer monitoring it a baseline. The other sent a signal when it encountered the target molecule, making it easier for the computer to differentiate between the plant’s normal state and chemical detection.
After being picked up at the roots, the explosive compound took approximately 10 minutes to reach the leaves. Once detection took place, the nanotubes sent out a fluorescent signal, picked up by an infrared camera. That camera relayed the message to an attached computer, about the size of a smartphone, which sent an email alert. The monitoring system works from up to a meter away, and researchers are now working on increasing its range.
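The two-channel design lends itself to simple ratiometric detection: the constant reference channel normalizes out lighting and plant-to-plant variation, so only a shift in the sensor-to-reference ratio signals the target molecule. The signal values and threshold below are invented for illustration, not MIT's actual signal processing:

```python
def detect(sensor_signal, reference_signal, baseline_ratio, threshold=0.2):
    """Flag a detection when the sensor/reference ratio drifts from baseline.

    The reference nanotube channel gives a constant signal, so dividing by
    it cancels effects (light, plant health) that hit both channels equally.
    """
    ratio = sensor_signal / reference_signal
    return abs(ratio - baseline_ratio) / baseline_ratio > threshold

baseline = 0.80  # sensor/reference ratio calibrated with no nitroaromatics
quiet = detect(0.81, 1.00, baseline)  # small drift: no alert
alert = detect(0.50, 1.00, baseline)  # sensor channel dims sharply: alert
```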
And more on how the domestication of DNA (extending the domestication of plants, animals, etc.) can transform agriculture.
‘Green revolution’ crops bred to slash fertilizer use
Researchers have identified a molecule that increases plant growth while reducing the need for nitrogen.
A gene that enhances plants’ ability to absorb nitrogen could be used to breed high-yield varieties of rice, wheat and other staple crops that would need less fertilizer, researchers report on 15 August in Nature. That could slash costs for farmers worldwide, and help to limit the environmental damage that occurs when nitrogen-rich water and soil wash from farm fields into rivers and oceans.
The research focused on crops bred during the ‘green revolution’ of the 1960s, a period when agricultural scientists boosted yields by breeding smaller, hardier versions of common crops. Farmers used these alongside improved irrigation methods, strong pesticides and efficient fertilizers. That sent the global harvest of cereal crops soaring from 741 million tonnes in 1961 to 1.62 billion in 1985.
But the latest study shows that there is still room for improvement, says Kathryn Barton, a plant scientist at the Carnegie Institution for Science in Stanford, California. “If you thought that these green-revolution varieties were it — that they’re the end of the line — you’re wrong, because there is more we can do,” she says.
And another important signal in the progress of DNA domestication.
Researchers might also be able to more easily temper the dark side of wheat. Many people are allergic to glutens and other wheat proteins, leading to disorders like celiac disease, baker’s asthma, and non-celiac wheat sensitivity. Scientists have managed to identify many of the specific proteins responsible, “but until now, we couldn’t determine the genes that encoded those proteins,” says Odd-Arne Olsen from the Norwegian University of Life Sciences. His team has now identified 356 such genes. Of these, 127 are new to science, and 222 were known but had been incorrectly sequenced.
Scientists Finally Crack Wheat’s Absurdly Complex Genome
Their efforts will make it much easier to breed new varieties of the world’s most important crop.
Scientists decoded the genome of rice in 2002. They completed the soybean genome in 2008. They mapped the maize genome in 2009. But only now has the long-awaited wheat genome been fully sequenced. That delay says nothing about wheat’s importance. It is arguably the most critical crop in the world. It’s grown on more land than anything else. It provides humanity with a fifth of our calories. But it also has one of the most complex genomes known to science.
For a start, wheat’s genome is monstrously big. While the genome of Arabidopsis—the first plant to be sequenced—contains 135 million DNA letters, and the human genome contains 3 billion, bread wheat has 16 billion. Just one of wheat’s chromosomes—3B—is bigger than the entire soybean genome.
To make things worse, the bread-wheat genome is really three genomes in one. About 500,000 years ago, before humans even existed, two species of wild grass hybridized with each other to create what we now know as emmer wheat. After humans domesticated this plant and planted it in their fields, a third grass species inadvertently joined the mix. This convoluted history has left modern bread wheat with three pairs of every chromosome, one pair from each of the three ancestral grasses. In technical lingo, that’s a hexaploid genome. In simpler terms, it’s a gigantic pain in the ass.
This is a very important Must Read signal - although many will likely be concerned about the spread of knowledge into Do It Yourself approaches, many others are equally concerned that scientific knowledge has been privatized for undue profit.
At the pharmacy, a pair of single-use Mylan EpiPens can cost over $600 and the company’s generic version costs $300 per pair, but an ongoing shortage means you probably can’t find them, even if you can afford them. In response, Four Thieves published instructions online for a DIY EpiPen that can be made for $30 in off-the-shelf parts and reloaded for $3. Shkreli drove the price of the lifesaving HIV medicine Daraprim up to $750 per pill. So Four Thieves developed an open source portable chemistry lab that allows anyone to manufacture their own Daraprim for just 25 cents apiece.
Meet the Anarchists Making Their Own Medicine
The Four Thieves Vinegar Collective is a network of tech-fueled anarchists taking on Big Pharma with DIY medicines.
The first time I encountered Michael Laufer, he was throwing thousands of dollars worth of homemade medicine into a packed audience at Hackers on Planet Earth (HOPE), a biennial conference in New York City.
“Does anyone here suffer from anaphylactic shock and not have access to epinephrine?” Laufer asked the audience. A few hands went up and Laufer stuffed a homemade EpiPen into one of them. “That’s one of the original ones we made,” he said. “Use it well.”
After a few minutes of gloating about pharma bro Martin Shkreli “rotting at Fort Dix” for raising the price of Daraprim, a lifesaving HIV medicine, from $13 to $750, Laufer grew serious. “It’s been two years, but despite everything that’s happened, the price of Daraprim hasn’t changed,” he said. He reached into his pocket and produced a handful of white pills. “I guess I better hand out some more,” Laufer said as he tossed the Daraprim into the audience.
With a shaved head, dark beard, and an ever-present camo jacket, Laufer doesn’t look like the type of person you’d seek out for medical advice—but that’s exactly his point. As the founding member of Four Thieves Vinegar, a volunteer network of anarchists and hackers developing DIY medical technologies, Laufer has spent the last decade working to liberate life-saving pharmaceuticals from the massive corporations that own them. Laufer has no formal training in medicine and he’ll be the first to tell you he’s not a doctor. In fact, from a regulatory standpoint he’s more qualified to do mathematical work on nuclear weapons than treat patients.
This is another important signal - we don’t have much time to develop an economic paradigm that is not based on scarcity, especially manufactured artificial scarcity designed to protect old business models and institutions.
Analyst: Renewable energy will be ‘effectively free’ by 2030
Analysts at Swiss investment bank UBS believe that by 2030, we could all be living without much of a carbon footprint — at least at home. The analysts believe that the cost of renewable energy will continue to dive heading into the next decade, and that by 2030, costs will be so low they will “effectively be free,” according to new research published this morning in the Financial Times (paywall).
The analysis explained that solar and wind farms are getting bigger, a move that is “great news for the planet, and probably also for the economy.” With increased popularity among consumers and, more importantly, energy providers, economies of scale come into play. With declining prices, it makes little sense to ignore alternative energy sources, especially those that are renewable.
This is definitely a good signal of positive potential for handling our complex future.
China, World’s Biggest Polluter, Hits Carbon Goals—12 Years Early
The country may have hit the peak it promised in the Paris climate accord well before its 2030 timetable. But there’s still more work to do.
In a year when climate change is moving from abstract theory to grimly tangible reality, a faint dot of hope may be on the horizon.
China, the world’s largest source of planet-warming carbon emissions, may have hit the peak it promised in the Paris climate accord well before its 2030 timetable. That’s the conclusion reached by scientists who looked at the country’s estimated carbon output between 2007 and 2016, as the country’s rapid industrialization slowed and its consumption of coal declined. The research is published in the journal Nature Geoscience.
“They are able to manage quite significant economic growth, but have been able to stabilize their emissions over the past few years,” said Dabo Guan, a professor of climate change economics at the University of East Anglia in Britain.
My neighbor is a true motorcycle enthusiast - after trying an electric motorcycle he was thrilled with the experience of being ‘in nature’ as he rode in real silence.
How electric vehicles are moving into the fast lane
From battery-powered delivery trucks and rickshaws, to Formula E racing cars, electric vehicles are entering the mainstream. But will a lack of investment in charging infrastructure hold the market back?
And not just in the richer developed nations.
"We're about to cross four million electric vehicles on the road globally some time in August," says Colin McKerracher, head of advanced transport at Bloomberg New Energy Finance.
Of course, that's only a sliver of the total number of vehicles on the road, but it's steeply up from just 500,000 in 2014.
An awesome new wind energy generator - the video is only 1 min long - well worth the view.
Bladeless wind turbines
A newly designed wind turbine leaves behind a loud, noisy and sometimes dangerous traditional feature: the blades.
For anyone looking for free access to science knowledge here’s a good site.
Unpaywall - An open database of 19,657,674 free scholarly articles.
We harvest Open Access content from over 50,000 publishers and repositories, and make it easy to find, track, and use.
Used and trusted by top organizations
We're integrated into thousands of library systems, search platforms, and other information products worldwide. In fact, if you're involved in scholarly communication, there's a good chance you've already used Unpaywall data.
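Unpaywall also exposes a simple REST API: a GET request to `https://api.unpaywall.org/v2/{DOI}` with an `email` parameter returns JSON describing any open-access copies. A minimal sketch of building the query and reading the result - the sample response dict below is fabricated for illustration:

```python
from urllib.parse import quote

UNPAYWALL_API = "https://api.unpaywall.org/v2/"

def unpaywall_url(doi: str, email: str) -> str:
    """Build the Unpaywall query URL for one DOI (the email is required)."""
    return f"{UNPAYWALL_API}{quote(doi)}?email={email}"

def best_oa_pdf(record: dict):
    """Extract the free-to-read PDF link from an Unpaywall JSON response."""
    location = record.get("best_oa_location") or {}
    return location.get("url_for_pdf")

url = unpaywall_url("10.1038/nature12373", "you@example.com")
# Fetch `url` with urllib or requests; a (fabricated) response might look like:
sample = {"is_oa": True,
          "best_oa_location": {"url_for_pdf": "https://example.org/paper.pdf"}}
pdf_link = best_oa_pdf(sample)
```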
This is a Nature article discussing Unpaywall.
“Unpaywall is a ground-breaking development,” says Alberto Martín-Martín, who studies bibliometrics and science communication at the University of Granada in Spain. “It takes us one step closer to achieving a true open research infrastructure.”
How Unpaywall is transforming open science
Unpaywall has become indispensable to many academics, and tie-ins with established scientific search engines could broaden its reach.
This free service locates open-access copies of articles - papers that have been legally archived and are freely available on other websites - and presents them to users who might otherwise have hit a paywalled version. Since one part of the technology was released in 2016, it has become indispensable for many researchers. And firms that run established scientific search engines are starting to take advantage of Unpaywall.
On 26 July, Elsevier announced plans to integrate Unpaywall into its Scopus database searches, allowing it to deliver millions more free-to-read papers to users than it does currently. Scopus’s embrace of Unpaywall, along with similar moves by other search engines, means that much more open-access content is now at researchers’ fingertips. These deals are also enabling funders, librarians and others to study open-access publishing trends comprehensively for the first time.