Thursday, August 8, 2019

Friday Thinking 9 Aug 2019

Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning.
Work that engages our whole self becomes play that works.
Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  
In the 21st century - the planet is the little school house in the galaxy.
Citizenship is the battlefield of the 21st  Century

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


Content
Quotes:


Articles:





If we continue to operate in terms of a Cartesian dualism of mind versus matter, we shall probably also continue to see the world in terms of God versus man; elite versus people; chosen race versus others; nation versus nation; and man versus environment. It is doubtful whether a species having both an advanced technology and this strange way of looking at its world can endure.

Impossible choices

Gregory Bateson saw the creative potential of paradox


aspirin was discovered in 1897, and an explanation of how it works followed in 1995. That, in turn, has spurred some research leads on making better pain relievers through something other than trial and error.


This kind of discovery — answers first, explanations later — I call “intellectual debt.” We gain insight into what works without knowing why it works. We can put that insight to use immediately, and then tell ourselves we’ll figure out the details later. Sometimes we pay off the debt quickly; sometimes, as with aspirin, it takes a century; and sometimes we never pay it off at all.


Be they of money or ideas, loans can offer great leverage. We can get the benefits of money — including use as investment to produce more wealth — before we’ve actually earned it, and we can deploy new ideas before having to plumb them to bedrock truth.


Indebtedness also carries risks. For intellectual debt, these risks can be quite profound, both because we are borrowing as a society, rather than individually, and because new technologies of artificial intelligence — specifically, machine learning — are bringing the old model of drug discovery to a seemingly unlimited number of new areas of inquiry. Humanity’s intellectual credit line is undergoing an extraordinary, unasked-for bump up in its limit.


To understand the problems with intellectual debt despite its boon, it helps first to consider a sibling: engineering’s phenomenon of technical debt.

Intellectual Debt: With Great Power Comes Great Ignorance



“The bottom line was that there would be a supernova close enough to the Earth to drastically affect the ozone layer about once every billion years,” says Gehrels, who still works at Goddard. That’s not very often, he admits, and no threatening stars prowl the solar system today. But Earth has existed for 4.6 billion years, and life for about half that time, meaning the odds are good that a supernova blasted the planet sometime in the past. The problem is figuring out when. “Because supernovas mainly affect the atmosphere, it’s hard to find the smoking gun,” Gehrels says.


Astronomers have searched the surrounding cosmos for clues, but the most compelling evidence for a nearby supernova comes—somewhat paradoxically—from the bottom of the sea. Here, a dull and asphalt black mineral formation called a ferromanganese crust grows on the bare bedrock of underwater mountains—incomprehensibly slowly. In its thin, laminated layers, it records the history of planet Earth and, according to some, the first direct evidence of a nearby supernova.


...Based on the concentration of Fe-60 in the crust, Knie estimated that the supernova exploded at least 100 light-years from Earth—three times the distance at which it could’ve obliterated the ozone layer—but close enough to potentially alter cloud formation, and thus, climate. While no mass-extinction events happened 2.8 million years ago, some drastic climate changes did take place—and they may have given a boost to human evolution. Around that time, the African climate dried up, causing the forests to shrink and give way to grassy savanna. Scientists think this change may have encouraged our hominid ancestors as they descended from trees and eventually began walking on two legs.


...Like any young theory, this one is still speculative and has its opponents. Some scientists think Fe-60 may have been brought to Earth by meteorites, and others think these climate changes can be explained by decreasing greenhouse gas concentrations, or the closing of the ocean gateway between North and South America. But Knie’s new tool gives scientists the ability to date other, possibly more ancient, supernovas that may have passed in the vicinity of Earth, and to study their influence on our planet.

The Secret History of the Supernova at the Bottom of the Sea



This is a strong signal of change - not just in the retail food business, but in the whole retail paradigm. A very long read (8K words) but illuminating for those interested in the future of business and work. Reading the subtext - are marketers drinking their own Kool-Aid? Isn’t there a dimension of ‘fake news’ in this approach to the gamification of faux-experiences - all in the name of consuming more?
Forget branding. Forget sales. Kelley’s main challenge is redirecting the attention of older male executives, scared of the future and yet stuck in their ways, to the things that really matter.
“I make my living convincing male skeptics of the power of emotions,” he says.

The Man Who’s Going to Save Your Neighborhood Grocery Store

American food supplies are increasingly channeled through a handful of big companies: Amazon, Walmart, FreshDirect, Blue Apron. What do we lose when local supermarkets go under? A lot — and Kevin Kelley wants to stop that.
Part square-jawed cattle rancher, part folksy CEO, Niemann is the last person you’d expect to ask for a fresh start. He’s spent his whole life in the business, transforming the grocery chain his grandfather founded in 1917 into a regional powerhouse with more than 100 supermarkets and convenience stores across four states. In 2014, he was elected chair of the National Grocers Association. It’s probably fair to say no one alive knows how to run a grocery store better than Rich Niemann. Yet Niemann was no longer sure the future had a place for stores like his.


He was right to be worried. The traditional American supermarket is dying. It’s not just Amazon’s purchase of Whole Foods, an acquisition that trade publication Supermarket News says marked “a new era” for the grocery business — or the fact that Amazon hopes to launch a second new grocery chain in 2019, according to a recent report from The Wall Street Journal, with a potential plan to scale quickly by buying up floundering supermarkets. Even in plush times, grocery is a classic “red ocean” industry, highly undifferentiated and intensely competitive. (The term summons the image of a sea stained with the gore of countless skirmishes.) Now, the industry’s stodgy old playbook — “buy one, get one” sales, coupons in the weekly circular — is hurtling toward obsolescence. And with new ways to sell food ascendant, legacy grocers like Rich Niemann are failing to bring back the customers they once took for granted. You no longer need grocery stores to buy groceries.


….Harvest Market is the anti-Amazon. It’s designed to excel at what e-commerce can’t do: convene people over the mouth-watering appeal of prize ingredients and freshly prepared food. The proportion of groceries sold online is expected to swell over the next five or six years, but Harvest is a bet that behavioral psychology, spatial design, and narrative panache can get people excited about supermarkets again. Kelley isn’t asking grocers to be more like Jeff Bezos or Sam Walton. He’s not asking them to be ruthless, race-to-the-bottom merchants. In fact, he thinks that grocery stores can be something far greater than we ever imagined — a place where farmers and their urban customers can meet, a crucial link between the city and the country.


But first, if they’re going to survive, Kelley says, grocers need to start thinking like Alfred Hitchcock.

Here’s a signal of the proliferation of curiosities when more people have access to the means of expression. There are 10-minute, 17-minute and 3-minute example videos for those curious about curiosities.
Mukbang, a portmanteau of the Korean words for “eating” and “broadcasting,” first popped up on Korean live-streaming sites like Afreeca in 2010. Mukbang hosts provide company for viewers dining alone and, in some cases, act as their avatars, eating whatever the audience wanted them to eat.

In the country where mukbang was invented, this YouTuber is pushing the genre

ASMR + mukbang + supersized foods = content gold
If the point of mukbang is to provide viewers with the secondhand satisfaction of watching someone else eat delicious food, then the ASMR mukbang videos of South Korean YouTuber Yammoo satisfy a craving you didn’t even know you had. On the stranger end of the spectrum of the “food” she’s eaten: stones (made of chocolate) or light bulbs (made of candy). The video where she eats balls of air pollution (made of cotton candy) is a perfect example of how the creator is pushing the decade-old mukbang genre into its next evolution.


With air pollution levels at a record high, Seoul citizens have had to incorporate face masks, air quality apps, and home air purifiers into their daily lives. “This is a mukbang everyone can join in together,” Yammoo tells her viewers. “If you’re home, open your windows — if you’re outside, you’re already participating.” She removed her black face mask and began eating the grayish cotton candy balls, her chewing sounds amplified by the microphone. “The air quality levels were good today because you ate all of the pollution ^^♥,” wrote a commenter.

It is a truism of our times that life-long learning is no longer a luxury for those with both the intrinsic motivation and the leisurely opportunity to fulfill that motivation - it is a requirement of an employable life-long career. But what most of us forget is the high degree of unlearning that is also required to embrace accelerating change.
Maybe that is one of the key uses of ‘narrative’ in creating cultural coherence in the face of change and forgetting - and key to our sense of self-continuity in the face of ephemeral memory.
Much is still unknown about how memories are created and accessed, and addressing such mysteries has consumed a lot of memory researchers’ time. How the brain forgets, by comparison, has been largely overlooked. 


… its dynamic nature is not a flaw but a feature, Frankland says — something that evolved to aid learning. The environment is changing constantly and, to survive, animals must adapt to new situations. Allowing fresh information to overwrite the old helps them to achieve that.

The forgotten part of memory

Researchers are coming to realize that the ability to forget, long thought to be a glitch of memory, is crucial to how the brain works.
Memories make us who we are. They shape our understanding of the world and help us to predict what’s coming. For more than a century, researchers have been working to understand how memories are formed and then fixed for recall in the days, weeks or even years that follow. But those scientists might have been looking at only half the picture. To understand how we remember, we must also understand how, and why, we forget.


Until about ten years ago, most researchers thought that forgetting was a passive process in which memories, unused, decay over time like a photograph left in the sunlight. But then a handful of researchers who were investigating memory began to bump up against findings that seemed to contradict that decades-old assumption. They began to put forward the radical idea that the brain is built to forget.


A growing body of work, cultivated in the past decade, suggests that the loss of memories is not a passive process. Rather, forgetting seems to be an active mechanism that is constantly at work in the brain. In some — perhaps even all — animals, the brain’s standard state is not to remember, but to forget. And a better understanding of that state could lead to breakthroughs in treatments for conditions such as anxiety, post-traumatic stress disorder (PTSD), and even Alzheimer’s disease.

This is an interesting signal of a potential new occupation in the world of Big Data and algorithms - Data Detective - or Data Vigilante Hero - Dat Man.
“His technique has been shown to be incredibly useful,” says Paul Myles, director of anaesthesia and perioperative medicine at the Alfred hospital in Melbourne, Australia, who has worked with Carlisle to examine research papers containing dodgy statistics. “He’s used it to demonstrate some major examples of fraud.”
Carlisle believes that he is helping to protect patients, which is why he spends his spare time poring over others’ studies. “I do it because my curiosity motivates me to do so,” he says, not because of an overwhelming zeal to uncover wrongdoing: “It’s important not to become a crusader against misconduct.”

How a data detective exposed suspicious medical trials

Anaesthetist John Carlisle has spotted problems in hundreds of research papers — and spurred a leading medical journal to change its practice.
By day, Carlisle is an anaesthetist working for England’s National Health Service in the seaside town of Torquay. But in his spare time, he roots around the scientific record for suspect data in clinical research. Over the past decade, his sleuthing has included trials used to investigate a wide range of health issues, from the benefits of specific diets to guidelines for hospital treatment. It has led to hundreds of papers being retracted and corrected, because of both misconduct and mistakes. And it has helped to end the careers of some large-scale fakers: of the six scientists worldwide with the most retractions, three were brought down using variants of Carlisle’s data analyses.


Together with the work of other researchers who doggedly check academic papers, his efforts suggest that the gatekeepers of science — journals and institutions — could be doing much more to spot mistakes. In medical trials, the kind that Carlisle focuses on, that can be a matter of life and death.
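
To make the idea a little more concrete, here is a toy sketch - my own illustration, not Carlisle's actual method - of one way randomized trials can be screened for suspect numbers: recompute p-values for the reported baseline variables from their summary statistics and check whether, across many variables, they resemble the roughly uniform distribution that genuine randomization should produce. The example rows and the use of scipy are assumptions for illustration only.

```python
# Toy screen for baseline tables in randomized trials (illustration only, not
# Carlisle's actual analyses): recompute p-values from reported summary
# statistics and test whether they look uniform, as honest randomization implies.
from scipy import stats

# Hypothetical reported baseline rows: (mean, sd, n) for each of two trial arms.
baseline_rows = [
    ((54.2, 10.1, 120), (54.5, 9.8, 118)),    # age
    ((27.1, 4.0, 120), (26.9, 4.2, 118)),     # body-mass index
    ((132.0, 15.3, 120), (131.5, 14.9, 118)), # systolic blood pressure
]

pvals = []
for (m1, s1, n1), (m2, s2, n2) in baseline_rows:
    # Welch's t-test computed directly from the published summary statistics.
    t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)
    pvals.append(float(p))

# Under honest randomization, baseline p-values should be roughly uniform on [0, 1];
# a strong departure (e.g. values all implausibly close to 1) flags a trial for scrutiny.
ks_stat, ks_p = stats.kstest(pvals, "uniform")
print(pvals, ks_stat, ks_p)
```
With only a handful of variables a check like this has little power; meaningful screening needs many variables across many trials, which is exactly the kind of tedium computers are good at.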


This is a good signal in the emerging development of AI. For about a decade, protein-folding research has taken advantage of crowdsourcing by using humans to engage in a game called Foldit. Well, that may not have another decade left.

AI protein-folding algorithms solve structures faster than ever

Deep learning makes its mark on protein-structure prediction.
The race to crack one of biology’s grandest challenges — predicting the 3D structures of proteins from their amino-acid sequences — is intensifying, thanks to new artificial-intelligence (AI) approaches.


At the end of last year, Google’s AI firm DeepMind debuted an algorithm called AlphaFold, which combined two techniques that were emerging in the field and beat established contenders in a competition on protein-structure prediction by a surprising margin. And in April this year, a US researcher revealed an algorithm that uses a totally different approach. He claims his AI is up to one million times faster at predicting structures than DeepMind’s, although probably not as accurate in all situations.


The latest algorithm’s creator, Mohammed AlQuraishi, a biologist at Harvard Medical School in Boston, Massachusetts, hasn’t yet directly compared the accuracy of his method with that of AlphaFold — and he suspects that AlphaFold would beat his technique in accuracy when proteins with sequences similar to the one being analysed are available for reference. But he says that because his algorithm uses a mathematical function to calculate protein structures in a single step — rather than in two steps like AlphaFold, which uses the similar structures as groundwork in the first step — it can predict structures in milliseconds rather than hours or days.


Technical difficulties meant AlQuraishi’s algorithm did not perform well at CASP13. He published details of the AI in Cell Systems in April and made his code publicly available on GitHub, hoping others will build on the work. (The structures for most of the proteins tested in CASP13 have not been made public yet, so he still hasn’t been able to directly compare his method with AlphaFold.)

And more about using AI to enhance human research capacity.
"This confirmed that SAM not only had the ability to identify good drugs but in fact had come up with better human immune drugs than currently exist," 
Petrovsky said this potentially shortens the normal drug discovery and development process by decades and saves hundreds of millions of dollars.

Australian Researchers Have Just Released The World's First AI-Developed Vaccine

A team at Flinders University in South Australia has developed a new vaccine believed to be the first human drug in the world to be completely designed by artificial intelligence (AI).


While drugs have been designed using computers before, this vaccine went one step further, being independently created by an AI program called SAM (Search Algorithm for Ligands).


Flinders University Professor Nikolai Petrovsky, who led the development, told Business Insider Australia its name is derived from what it was tasked to do: search the universe for all conceivable compounds to find a good human drug (also called a ligand).


"We had to teach the AI program on a set of compounds that are known to activate the human immune system, and a set of compounds that don't work. The job of the AI was then to work out for itself what distinguished a drug that worked from one that doesn't," Petrovsky said, who is also the Research Director of Australian biotechnology company Vaxine.

This is an interesting signal of how to potentially protect people from HIV - but also a new paradigm of vaccine. 

This implant could prevent HIV infection

A tiny implant may prevent a person from getting HIV for a year, reports the New York Times.
The implant: It’s a plastic tube the size of a matchstick that slowly releases an anti-HIV drug. It would be placed under the skin of the arm.


HIV prevention: Even if you don't have the virus, taking anti-HIV drugs daily can stop you from getting infected. Such "pre-exposure prophylaxis," or PrEP, is for people at high risk of getting the virus, according to the Centers for Disease Control and Prevention. 


Set and forget: The idea behind the implant is that it would make PrEP easier. Because it would release an antiviral drug little by little over months, people would not have to remember to swallow pills. It’s based on a similar implant for birth control.


The evidence: The device is being developed by Merck, which carried out a three-month-long preliminary test in just 12 men. It contains an experimental, but long-acting, anti-HIV drug called islatravir. The company presented the prototype today at an HIV science conference in Mexico City. 

This is a fascinating weak signal related to the domestication of DNA (and thus also of proteomics) and the alleviation of allergies. 

Giving cats food with an antibody may help people with cat allergies

Pet-food maker Purina is studying how adding an antibody to the chow curbs reactions
Cat lovers who sneeze and sniffle around their feline friends might one day find at least partial relief in a can of cat food.


New research suggests that feeding cats an antibody to the major allergy-causing protein in cats renders some of the protein, called Fel d1, unrecognizable to the human immune system, reducing an allergic response. After 105 cats were fed the antibody for 10 weeks, the amount of active Fel d1 protein on the cats’ hair dropped by 47 percent on average, researchers from pet food–maker Nestlé Purina report in the June Immunity, Inflammation and Disease.


And in a small pilot study, 11 people allergic to cats experienced substantially reduced nasal symptoms and less itchy, scratchy eyes when exposed in a test chamber to hair from cats fed the antibody diet, compared with cats fed a control diet. The preliminary findings were presented in Lisbon, Portugal at the European Academy of Allergy and Clinical Immunology Congress in June.


The Fel d1 protein is produced in cats’ salivary and sebaceous glands. Cats transfer the protein to their hair when they groom by licking themselves and excrete it in their urine. Humans are then exposed to it on cat hair and dander — dead skin — or in the litter box. Cat allergies plague up to 20 percent of people, and Fel d1 is responsible for 95 percent of allergic reactions to cats.

An excellent signal of the continuing development of other computational paradigms. There is a 3 min video explanation as well.
With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware. Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time.

Brains scale better than CPUs. So Intel is building brains

The new Pohoiki Beach builds on the 2017 success of Intel's Loihi NPU.
Neuromorphic engineering—building machines that mimic the function of organic brains in hardware as well as software—is becoming more and more prominent. The field has progressed rapidly, from conceptual beginnings in the late 1980s to experimental field programmable neural arrays in 2006, early memristor-powered device proposals in 2012, IBM's TrueNorth NPU in 2014, and Intel's Loihi neuromorphic processor in 2017. Yesterday, Intel broke a little more new ground with the debut of a larger-scale neuromorphic system, Pohoiki Beach, which integrates 64 of its Loihi chips.


Where traditional computing works by running numbers through an optimized pipeline, neuromorphic hardware performs calculations using artificial "neurons" that communicate with each other. This is a workflow that's highly specialized for specific applications, much like the natural neurons it mimics in function—so you likely won't replace conventional computers with Pohoiki Beach systems or its descendants, for the same reasons you wouldn't replace a desktop calculator with a human mathematics major.


However, neuromorphic hardware is proving able to handle tasks organic brains excel at much more efficiently than conventional processors or GPUs can. Visual object recognition is perhaps the most widely realized task where neural networks excel, but other examples include playing foosball, adding kinesthetic intelligence to prosthetic limbs, and even understanding skin touch in ways similar to how a human or animal might understand it.


Loihi, the underlying chip Pohoiki Beach is integrated from, consists of 130,000 neuron analogs—hardware-wise, this is roughly equivalent to half of the neural capacity of a fruit fly. Pohoiki Beach scales that up to 8 million neurons—about the neural capacity of a zebrafish. But what's perhaps more interesting than the raw computational power of the new neural network is how well it scales.
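
For readers wondering what an artificial "spiking" neuron actually computes, here is a minimal leaky integrate-and-fire model - the generic textbook abstraction that neuromorphic chips implement in silicon, not Intel's specific Loihi design. The parameter values are arbitrary and chosen only for illustration.

```python
# Minimal leaky integrate-and-fire neuron: a generic sketch of the spiking model
# that neuromorphic hardware realizes in silicon (not Intel's Loihi design).
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate the input over time; emit a spike (1) whenever the membrane
    potential crosses threshold, then reset. Returns the spike train."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # The membrane potential leaks back toward rest while integrating the input.
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive produces a regular spike train; a stronger drive, a higher rate.
weak = simulate_lif(np.full(200, 1.2))
strong = simulate_lif(np.full(200, 2.0))
print(int(weak.sum()), int(strong.sum()))
```
The point of the exercise: information lives in the timing and rate of discrete spikes rather than in continuous activations, so the hardware can sit idle - and save power - whenever nothing is firing.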


Here is a 17-minute video explaining neuromorphic computing

What Is Neuromorphic Computing (Cognitive Computing)

We’ll discuss what cognitive computing is – more specifically, the difference between the current von Neumann computing architecture and the more biologically representative neuromorphic architecture, and how these two paired together will yield massive performance and efficiency gains!
Following that, we’ll discuss the benefits of cognitive computing systems further, as well as current cognitive computing initiatives, TrueNorth and Loihi.
To conclude, we’ll extrapolate and discuss the future of cognitive computing in terms of brain simulation, artificial intelligence and brain-computer interfaces!

This is a weak signal - but fascinating - given the increasing interest in the hard problem of consciousness and the nature of the universe.

The Strange Similarity of Neuron and Galaxy Networks

Your life’s memories could, in principle, be stored in the universe’s structure.
Christof Koch, a leading researcher on consciousness and the human brain, has famously called the brain “the most complex object in the known universe.” It’s not hard to see why this might be true. With a hundred billion neurons and a hundred trillion connections, the brain is a dizzyingly complex object.


But there are plenty of other complicated objects in the universe. For example, galaxies can group into enormous structures (called clusters, superclusters, and filaments) that stretch for hundreds of millions of light-years. The boundary between these structures and neighboring stretches of empty space called cosmic voids can be extremely complex. Gravity accelerates matter at these boundaries to speeds of thousands of kilometers per second, creating shock waves and turbulence in intergalactic gases. We have predicted that the void-filament boundary is one of the most complex volumes of the universe, as measured by the number of bits of information it takes to describe it.


This got us to thinking: Is it more complex than the brain?
So we—an astrophysicist and a neuroscientist—joined forces to quantitatively compare the complexity of galaxy networks and neuronal networks. The first results from our comparison are truly surprising: Not only are the complexities of the brain and cosmic web actually similar, but so are their structures. The universe may be self-similar across scales that differ in size by a factor of a billion billion billion.

Another signal in the growing body of research indicating the importance of our microbial profile for physical and mental health.

Boosting a gut bacterium helps mice fight an ALS-like disease

People with Lou Gehrig's disease appear to have a dearth of the microbes
Mice that develop a degenerative nerve disease similar to amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, fared better when bacteria making vitamin B3 were living in their intestines, researchers report July 22 in Nature. Those results suggest that gut microbes may make molecules that can slow progression of the deadly disease.


The researchers uncovered clues that the mouse results may also be important for people with ALS. But the results are too preliminary to inform any changes in treating the disease, which at any given time affects about two out of every 100,000 people, or about 16,000 people in the United States, says Eran Elinav, a microbiome researcher at the Weizmann Institute of Science in Rehovot, Israel.


Microbiomes of ALS mice contained almost no Akkermansia muciniphila bacteria. Restoring A. muciniphila in the ALS mice slowed progression of the disease, and the mice lived longer than untreated rodents. By contrast, greater numbers of two other normal gut bacteria, Ruminococcus torques and Parabacteroides distasonis, were associated with more severe symptoms.

This is an excellent signal of the positive potential of domesticating DNA.

This is the first fungus known to host complex algae inside its cells

It’s unclear if the newly discovered alliance exists in the wild
A soil fungus and a marine alga have formed a beautiful friendship.


In a lab dish, scientists grew the fungus Mortierella elongata with a photosynthetic alga called Nannochloropsis oceanica. This odd couple formed a mutually beneficial team that kept each other going when nutrients such as carbon and nitrogen were scarce, researchers report July 16 in eLife.


Surprisingly, after about a month together, the partners got even cozier. Algal cells began growing inside the fungi’s super long cells called hyphae — the first time that scientists have identified a fungus that can harbor eukaryotic algae inside itself. (In eukaryotic cells, DNA is stored in the nucleus.) In lichens, a symbiotic pairing of fungi and algae, the algae remain outside of the fungal cells. 


In the new study, biochemist Zhi-Yan Du of Michigan State University in East Lansing and his colleagues used heavy forms of carbon and nitrogen to trace the organisms’ nutrient exchange. The fungi passed more than twice as much nitrogen to their algal partners as the algae sent to the fungi, the team found. And while both partners lent each other carbon, the algal cells had to touch the fungi’s hyphae cells to make their carbon deliveries.


It’s unclear if the newly discovered alliance exists in the wild. But both N. oceanica and M. elongata are found around the world, and could interact in places such as tidal zones. Learning more about how the duo teams up may shed light on how symbiotic partnerships evolve.

An excellent signal of the emergence of new methodologies in biological research by leveraging AI and other technologies. There is an excellent 2 min video explanation as well.
What’s so hard about seeing inside a living cell?
If you want to look at a cell when it’s alive, there are basically two limitations. We can blast the cell with laser light to get these [fluorescent protein] labels to illuminate. But that laser light is phototoxic — the cell is just basically baking in the sun in the desert.
The other limitation is that these labels are attached to an original protein in the cell that needs to go somewhere and do things. But the protein now has this big stupid fluorescent molecule attached to it. That might change the way the cell works if I have too many labels. Sometimes when you try to introduce these fluorescent labels, your experiment just doesn’t work out. Sometimes, the labels are lethal to the cell.

His Artificial Intelligence Sees Inside Living Cells

Instead of performing expensive fluorescence imaging experiments, scientists can use this “label-free determination” to efficiently assemble high-fidelity, three-dimensional movies of the interiors of living cells.


The data can also be used to build a biologically accurate model of an idealized cell — something like the neatly labeled diagram in a high school textbook but with greater scientific accuracy. That’s the goal of the institute’s project.


Johnson’s use of machine learning to visualize cell interiors began in 2010 at Carnegie Mellon University, just before a series of breakthroughs in deep learning technology began to transform the field of artificial intelligence. Nearly a decade later, Johnson thinks that his AI-augmented approach to live cell imaging could lead to software models that are so accurate, they reduce or even altogether eliminate the need for certain experiments. “We want to be able to take the cheapest image [of a cell] that we can and predict from that as much about that cell as we possibly can,” he said. “How is it organized? What’s the gene expression? What are its neighbors doing? For me, [label-free determination] is just a prototype for much more sophisticated things to come.”
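
As a rough sketch of what "label-free determination" means in practice - and only a sketch; the Allen Institute's models are far larger, three-dimensional networks - the idea is to train a network on paired images so that afterwards the cheap transmitted-light image alone predicts where a fluorescent label would have appeared. The tiny PyTorch model and random stand-in tensors below are assumptions for illustration.

```python
# Minimal sketch of label-free prediction: learn to map a cheap transmitted-light
# image to a fluorescence channel. Random tensors stand in for paired microscope data.
import torch
import torch.nn as nn

class TinyLabelFree(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # predicted fluorescence intensity
        )

    def forward(self, x):
        return self.net(x)

model = TinyLabelFree()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: a batch of (brightfield, fluorescence) image pairs, 64x64 pixels.
brightfield = torch.rand(8, 1, 64, 64)
fluorescence = torch.rand(8, 1, 64, 64)

for step in range(100):
    pred = model(brightfield)
    loss = loss_fn(pred, fluorescence)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Once trained on real pairs, only the cheap image is needed to "see" the labeled structure.
print(float(loss))
```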

Thursday, August 1, 2019

Friday Thinking 2 Aug 2019

Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning.
Work that engages our whole self becomes play that works.
Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  
In the 21st century - the planet is the little school house in the galaxy.
Citizenship is the battlefield of the 21st  Century

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9



Content
Quotes:


Articles:



The nature of work is in a constant process of change. In a time of rapid innovation, the expectations for high-quality digital services seem to be outgrowing the budgets that fuel the provision of these services. The lack of resources in combination with growing demands leads to a higher workload for the people who make it all possible. In reality, the human workforce is being robbed of valuable time as they often perform monotonous tasks that could already be delegated to software robots. Flowit, the Estonian company providing work process automation solutions, has realised this and is determined to be the catalyst for change.

e-Talks: The future of work with Flowit CEO Andres Aavik



“Stem cells are something people have been working on for years” in studies of development, wound healing and cancer, Ruiz-Trillo said. Now, it’s becoming clear that they will be “interesting for understanding evolution as well,” for discovering how animals came to be.

Scientists Debate the Origin of Cell Types in the First Animals



without any map – digital or paper – navigation is often an unsettling, time-consuming challenge. Sometimes, it’s even a nightmare, and innumerable lives have been lost when navigation failed.


For the bulk of our evolution, before we could be considered ‘human’, our navigational abilities relied on using our sense organs. We take it for granted that we can see our way with our eyes. But we also have other senses that we can use to orient ourselves – more than ‘six’ if we include the vestibular system, which underlies our ability to balance, and proprioception, our sense of bodily articulation and movement. Yet, we seem so unlike nonhuman animals, who tap a host of alternative senses to find their way: bees see ultraviolet light, sharks sense electrical patterns, and bats echolocate.


When no other sensory aid is available, some nonhuman animals can also guide themselves using Earth’s magnetic field. Our planet is an enormous magnet, an object whose internal electrical charge causes it to be positive at one end and negative at the other. This means that Earth – like other, smaller magnets – can physically align a compass needle towards its North and South poles, a property known as polarity. The pull of a magnet is represented, in textbooks, by lines of force that predict where, precisely, the needle will point. But it’s nuanced: the force lines shift with what scientists call ‘inclination’ and ‘declination’, pointing towards Earth with increasing or decreasing angles to the horizontal plane, depending on how far or near the observer is to each pole. Arguably, these properties offer far superior navigational cues relative to your smartphone, which can break, malfunction or, ironically, become lost.
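
As a small worked example of the geometry described above (with made-up field values, not real measurements): given the local magnetic field in north/east/down components, declination is the horizontal angle of the field away from true north, and inclination is its dip below the horizontal plane - the two cues that vary systematically with latitude.

```python
# Worked example of declination and inclination from a field vector.
# The component values below are invented for illustration, in nanotesla.
import math

b_north, b_east, b_down = 18000.0, -3000.0, 45000.0

horizontal = math.hypot(b_north, b_east)
declination = math.degrees(math.atan2(b_east, b_north))    # positive = east of true north
inclination = math.degrees(math.atan2(b_down, horizontal)) # steeper toward the poles

print(f"declination {declination:.1f} deg, inclination {inclination:.1f} deg")
```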



Human magnetism



McLuhan urged us to think ahead. "Control over change would seem to consist in moving not with it but ahead of it. Anticipation gives the power to deflect and control force." By giving up our resistance and allowing our minds to travel ahead of the coming changes, McLuhan allowed some chance that we will rescue something of our humanity or invent something better to replace it.

The Wisdom of Saint Marshall, the Holy Fool





how might new economics move us beyond the mechanistic view of policy and regulation, and towards a view that takes into account the complexity, unpredictability, and reflexivity of the economy?


My view is that we must take a more deliberately evolutionary view of policy development. Rather than thinking of policy as a fixed set of rules or institutions engineered to address a particular set of issues, we should think of policy as an adapting portfolio of experiments that helps shape the evolution of the economy and society over time. 


New economics has the potential to significantly reframe these debates. It isn’t merely a matter of centrist compromise, of just splitting the difference. Rather it is a different frame that agrees with the right on some things, with the left on others, and neither on still other areas. For example, new economic work shows that Hayek was ahead of his time in his insights into the power of markets to self-organise, efficiently process information from millions of producers and consumers, and innovate. But new economic work also shows that Keynes was ahead of his time in his concerns about inherent instabilities in markets, the possibility that markets can fail to self-correct, and the need for the state to intervene when markets malfunction. Likewise, new economics research shows that humans are neither the selfish individualists of Hume nor the noble altruists of Rousseau, rather they are complex social creatures who engage in a never ending dance of cooperation and competition. Humans are what researchers such as Herb Gintis and Sam Bowles (2005) call ‘conditional co-operators and altruistic punishers’ – our cooperative instincts are strong and provide the basis for all organisation in the economy, but we also harshly punish cheaters and free-riders, and compete intensely for wealth and status.

How the Profound Changes in Economics Make Left Versus Right Debates Irrelevant



This is a very important signal of the future of the city and the community. We all need to re-imagine what a walkable life looks like with more people over 65 than under 15. How can we create flourishing multi-generational, diverse (including cognitively diverse), multi-use communities?
But if big cities are shedding people, they’re growing in other ways—specifically, in wealth and workism. The richest 25 metro areas now account for more than half of the U.S. economy, according to an Axios analysis of government data. Rich cities particularly specialize in the new tech economy: Just five counties account for about half of the nation’s internet and web-portal jobs. Toiling to build this metropolitan wealth are young college graduates, many of them childless or without school-age children; that is, workers who are sufficiently unattached to family life that they can pour their lives into their careers.
In 2018, the U.S. fertility rate fell to its all-time low. Without sustained immigration, the U.S. could shrink for the first time since World War I.

The Future of the City Is Childless

Last year, for the first time in four decades, something strange happened in New York City. In a non-recession year, it shrank.
We are supposedly living in the golden age of the American metropolis, with the same story playing out across the country. Dirty and violent downtowns typified by the “mean streets” of the 1970s became clean and safe in the 1990s. Young college graduates flocked to brunchable neighborhoods in the 2000s, and rich companies followed them with downtown offices.


New York is the poster child of this urban renaissance. But as the city has attracted more wealth, housing prices have soared alongside the skyscrapers, and young families have found staying put with school-age children more difficult. Since 2011, the number of babies born in New York has declined 9 percent in the five boroughs and 15 percent in Manhattan. (At this rate, Manhattan’s infant population will halve in 30 years.) In that same period, the net number of New York residents leaving the city has more than doubled. There are many reasons New York might be shrinking, but most of them come down to the same unavoidable fact: Raising a family in the city is just too hard. And the same could be said of pretty much every other dense and expensive urban area in the country.


In high-density cities like San Francisco, Seattle, and Washington, D.C., no group is growing faster than rich college-educated whites without children, according to Census analysis by the economist Jed Kolko. By contrast, families with children older than 6 are in outright decline in these places. In the biggest picture, it turns out that America’s urban rebirth is missing a key element: births.

If the future of the city is childless - what is the future of the ‘citizen’? This is a vital signal for a flourishing world - where we can grasp the crisis of consciousness that recognizes humans as a single species in a single environment. What is it to be a human on Spaceship Earth?
Citizens are splintered into two groups: natural citizens who claim citizenship as an automatic right and naturalized citizens who passively receive citizenship as a gift.

Rethinking Birthright

Amidst chants of “send her back,” it’s clear that we need a more just conception of citizenship—one that abolishes the distinction between “natural” and naturalized citizens.
On May 6, 2019, Archie Harrison Mountbatten-Windsor was born in a private London hospital to an American mother (Meghan Markle) and an English father (Prince Harry). Archie came into this world seventh in line to inherit the English Crown. He also came into the world the first member of the Royal family to be a citizen of both the United Kingdom and the United States. Importantly, however, at his birth, Archie became not only a citizen of the United States, he became a natural born citizen. This is the status first mentioned in Article II, Section 1 of the U.S. Constitution, which stipulates that “No Person except a natural born Citizen, or a Citizen of the United States, at the time of the Adoption of this Constitution, shall be eligible to the Office of President.”

This is a very interesting signal - not just of AI learning in the literary domain, but of a way to start researching more deeply many venues of popular culture and its Zeitgeist - and many other domains of social science research.
"SentiArt is a very simplistic tool that can be used by non-experts to simply compare the words in their test text (i.e., the text they want to do a sentiment analysis on) with an excel sheet that they can download from my homepage for free," Jacobs explained. "In principle, the tool should work in any language for which you can download Facebook's so called vector space models, on the fastText webpage. While my study focuses on English and German, you could also use it in Malaysian, Farsi or a Chinese dialect, and a multitude of other languages, as fastText has vector space models for over 290 languages."

SentiArt: a sentiment analysis tool for profiling characters from world literature texts

Arthur Jacobs, a professor and researcher at Freie Universität Berlin, has recently developed SentiArt, a new machine learning technique to carry out sentiment analyses of literary texts, as well as both fictional and non-fictional figures. In his paper, set to be published by Frontiers in Robotics and AI, he applied this tool to passages and characters from the Harry Potter books.


Jacobs has a background in neurolinguistics, a branch of linguistics that explores the neural mechanisms associated with language acquisition, comprehension and expression. In his previous work, he has often investigated how machine learning tools could be used to analyze and better understand human language. He is particularly interested in what he calls computational poetics, an area of study that focuses on the use of computational tools to understand literary content.


"In 2011, I wrote a book with Austrian poet Raoul Schrott called 'Brain and Poetry,' where we speculated that it would help to develop sentiment analysis tools for literary texts and poetry, not only for movie reviews or Trump tweets, which appears to be the gold standard in classical sentiment analysis," Jacobs told TechXplore. "We also wanted to develop a tool that can predict human neuronal and behavioral data, not only self-reports collected via Amazon Turk."


In his new study, Jacobs tried to put some of the ideas introduced in his previous work into practice by developing a tool for analyzing sentiment in literary texts. The technique he proposed, called SentiArt, uses vector space models and theory-guided, empirically validated lists of labels to compute the valence of individual words in a text. Vector space models are representations of text documents as vectors of identifiers, which are often used to filter, retrieve or organize information.
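
As a rough sketch of how a vector-space sentiment tool of this kind can work - my own simplified illustration, not Jacobs' SentiArt code - each word's valence can be scored by how similar its embedding is to a small set of empirically validated positive and negative label words. The toy vectors and label lists below are placeholders; real use would load fastText embeddings, as Jacobs describes.

```python
# Simplified illustration of vector-space valence scoring (not the SentiArt code).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical label words standing in for an empirically validated label list.
POSITIVE_LABELS = ["happiness", "love"]
NEGATIVE_LABELS = ["fear", "hatred"]

def word_valence(word, vectors):
    """Mean similarity to positive labels minus mean similarity to negative labels."""
    v = vectors[word]
    pos = np.mean([cosine(v, vectors[w]) for w in POSITIVE_LABELS])
    neg = np.mean([cosine(v, vectors[w]) for w in NEGATIVE_LABELS])
    return pos - neg

def text_valence(text, vectors):
    """Average valence over the words of a passage that have embeddings."""
    scores = [word_valence(w, vectors) for w in text.lower().split() if w in vectors]
    return sum(scores) / len(scores) if scores else 0.0

# Toy 3-d embeddings purely for demonstration; real use would load ~300-d fastText vectors.
toy = {
    "happiness": np.array([1.0, 0.2, 0.0]),
    "love":      np.array([0.9, 0.3, 0.1]),
    "fear":      np.array([-0.8, 0.4, 0.2]),
    "hatred":    np.array([-0.9, 0.1, 0.3]),
    "dementor":  np.array([-0.7, 0.2, 0.4]),
}
print(text_valence("the dementor drained all happiness", toy))
```
Averaging word scores over a passage gives a crude valence profile that can then be tracked across chapters or compared between characters.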

A good signal - a 1.5-minute video on the state of holograms and assemblages of other AI.

Microsoft hologram speaking Japanese

Microsoft is using its Mixed Reality studios and neural TTS engines to create a hologram of a person speaking in another language. The voice will sound just like the original person, but with a different language.

Another signal of the ‘Moore’s Law is Dead - Long Live Moore’s Law’ file.

Intel’s Neuromorphic System Hits 8 Million Neurons, 100 Million Coming by 2020

Researchers can use the 64-chip Pohoiki Beach system to make systems that learn and see the world more like humans
At the DARPA Electronics Resurgence Initiative Summit today in Detroit, Intel plans to unveil an 8-million-neuron neuromorphic system comprising 64 Loihi research chips—codenamed Pohoiki Beach. Loihi chips are built with an architecture that more closely matches the way the brain works than do chips designed to do deep learning or other forms of AI. For the set of problems that such “spiking neural networks” are particularly good at, Loihi is about 1,000 times as fast as a CPU and 10,000 times as energy efficient. The new 64-Loihi system represents the equivalent of 8-million neurons, but that’s just a step to a 768-chip, 100-million-neuron system that the company plans for the end of 2019.


Intel and its research partners are just beginning to test what massive neural systems like Pohoiki Beach can do, but so far the evidence points to even greater performance and efficiency, says Mike Davies, director of neuromorphic research at Intel.


Going from a single-Loihi to 64 of them is more of a software issue than a hardware one. “We designed scalability into the Loihi chip from the beginning,” says Davies. “The chip has a hierarchical routing interface…which allows us to scale to up to 16,000 chips. So 64 is just the next step.”

A fascinating signal - one that may be even more important for enabling communities on Earth, as we deal with the impact of climate change and begin the struggle to make the Earth a worthy art project for a flourishing future.

Silica aerogel could make Mars habitable

People have long dreamed of re-shaping the Martian climate to make it livable for humans. Carl Sagan was the first outside of the realm of science fiction to propose terraforming. In a 1971 paper, Sagan suggested that vaporizing the northern polar ice caps would "yield ~10³ g cm⁻² of atmosphere over the planet, higher global temperatures through the greenhouse effect, and a greatly increased likelihood of liquid water."


Sagan's work inspired other researchers and futurists to take seriously the idea of terraforming. The key question was: are there enough greenhouse gases and water on Mars to increase its atmospheric pressure to Earth-like levels?


In 2018, a pair of NASA-funded researchers from the University of Colorado, Boulder and Northern Arizona University found that processing all the sources available on Mars would only increase atmospheric pressure to about 7 percent that of Earth—far short of what is needed to make the planet habitable.


Terraforming Mars, it seemed, was an unfulfillable dream.
Now, researchers from Harvard University, NASA's Jet Propulsion Lab, and the University of Edinburgh have a new idea. Rather than trying to change the whole planet, what if you took a more regional approach?

Evolution is about survival and adaptation is necessary to survive. Life changes the environment - which requires changes in living systems - and on and on. This is a good signal of how we adapt and change.

A (Very) Close Look at Carbon Capture and Storage

A material called ZIF-8 swells up when carbon dioxide molecules are trapped inside, new images reveal
A new kind of molecular-scale microscope has been trained for the first time on a promising wonder material for carbon capture and storage. The results, researchers say, suggest a few tweaks to this material could further enhance its ability to scrub greenhouse gases from emissions produced by traditional power plants.


The announcement comes in the wake of a separate study concerning carbon capture published in the journal Nature. The researchers involved in that study found that keeping the average global temperature change to below 1.5 degrees C (the goal of the Paris climate accords) may require more aggressive action than previously anticipated. It will not be enough, they calculated, to stop building new greenhouse-gas-emitting power stations and allow existing plants to age out of existence. Some existing plants will also need to be shuttered or retrofitted with carbon capture and sequestration technology.


The wonder material that could potentially help is a cage-like lattice inside which individual carbon dioxide (CO2) molecules can be trapped. Called a metal-organic framework (MOF), the material (consisting of metal ions attached like K’NEX hubs to rods made of organic molecules) also holds promise as a medium for drug therapies, desalination filters, nuclear waste containers, and photovoltaics.

Creating the energy to produce clean water could enable many new business models for food and energy production.
In lab experiments under a lamp whose illumination mimics the sun, a prototype device converted about 11 percent of incoming light into electricity. That’s comparable to commercial solar cells, which usually transform some 10 to 20 percent of the sunlight they soak up into usable energy. The researchers tested how well their prototype purified water by feeding saltwater and dirty water laced with heavy metals into the distiller. Based on those experiments, a device about a meter across is estimated to pump out about 1.7 kilograms of clean water per hour.

This solar-powered device produces energy and cleans water at the same time

Still a prototype, the machine could one day help curb electricity and freshwater shortages
By mounting a water distillation system on the back of a solar cell, engineers have constructed a device that doubles as an energy generator and water purifier.


While the solar cell harvests sunlight for electricity, heat from the solar panel drives evaporation in the water distiller below. That vapor wafts through a porous polystyrene membrane that filters out salt and other contaminants, allowing clean water to condense on the other side. “It doesn’t affect the electricity production by the [solar cell]. And at the same time, it gives you bonus freshwater,” says study coauthor Peng Wang, an engineer at King Abdullah University of Science and Technology in Thuwal, Saudi Arabia.


Solar farms that install these two-for-one machines could help meet the increasing global demand for freshwater while cranking out electricity, researchers report online July 9 in Nature Communications.


Using this kind of technology to tackle two big challenges at once “is a great idea,” says Jun Zhou, a materials scientist at Huazhong University of Science and Technology in Wuhan, China, not involved in the work.
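
A quick back-of-envelope check of the figures quoted above (11 percent efficiency, a device about a metre across, 1.7 kilograms of water per hour) gives a feel for the scale. The solar irradiance and hours of peak sun in the sketch are my assumptions, not values from the paper.

```python
# Back-of-envelope check of the reported figures; irradiance and sun-hours are assumed.
SOLAR_IRRADIANCE_W_PER_M2 = 1000   # typical peak sunlight (assumption)
DEVICE_AREA_M2 = 1.0               # "about a meter across" (approximation)
ELECTRIC_EFFICIENCY = 0.11         # ~11% of incoming light converted to electricity
WATER_KG_PER_HOUR = 1.7            # reported clean-water output
PEAK_SUN_HOURS_PER_DAY = 6         # assumption

electric_power_w = SOLAR_IRRADIANCE_W_PER_M2 * DEVICE_AREA_M2 * ELECTRIC_EFFICIENCY
daily_energy_kwh = electric_power_w * PEAK_SUN_HOURS_PER_DAY / 1000
daily_water_kg = WATER_KG_PER_HOUR * PEAK_SUN_HOURS_PER_DAY

print(f"~{electric_power_w:.0f} W electric in full sun, "
      f"~{daily_energy_kwh:.1f} kWh and ~{daily_water_kg:.0f} kg of clean water per sunny day")
```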

One more signal of the transformation of energy geopolitics.

Kenya launches largest wind power plant in Africa

Kenya has launched Africa's largest wind power farm in a bid to boost electricity generating capacity and to meet the country's ambitious goal of 100% green energy by 2020.


The farm, known as the Lake Turkana Wind Power (LTWP), will generate around 310 megawatts of power for the national grid and will increase the country's electricity supply by 13%, President Uhuru Kenyatta said at the launch of the project on Friday.

In many ways we are entering the world of corporate space colonization (yes, all those sci-fi movies may have a point). But this is a fascinating signal of an alternative to both government and corporate space exploration.

Crowdfunded spacecraft LightSail 2 snaps amazing photos ahead of solar sail deployment

LightSail 2 may still have at least a few days to go before it begins its primary purpose, by unfurling the solar sail it has on board and finding out more about propelling a spacecraft using only the force of photons, but it’s not wasting any time on its orbital voyage. New photos from the crowdfunded spacecraft, which is operated by The Planetary Society, provide a stunning high-resolution look at the Earth from its unique vantage point.


The spacecraft just got a firmware update that corrected some issues with its orientation control after a test of its solar sailing mode, absent the actual use of the sail itself. The patch was uploaded successfully, according to The Planetary Society, and the spacecraft overall is “healthy and stable” as it stands. The earliest possible date for solar sail deployment is July 21, which is this Sunday, but that’ll depend on the mission team’s confidence in it actually being ready to unfurl and use successfully.


LightSail 2’s development was funded in part via a successful crowdfunding campaign run by the Bill Nye-led Planetary Society, which continues to seek funding on CrowdRise for the spacecraft’s ongoing operation. Its goal is to test a spacecraft’s ability to fly powered only by the force of photons from the Sun striking a solar sail constructed of Mylar. This method of space-based transportation is extremely slow to get started, but thanks to the friction-free medium of outer space, it could be an extremely energy-efficient way for research craft to travel long distances.


It launched on June 25 as part of the shared payload of SpaceX’s most recent Falcon Heavy launch.
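
To see why solar sailing is "extremely slow to get started," a rough radiation-pressure estimate helps. The sail area and spacecraft mass below are approximate public figures, and perfect reflection at normal incidence is an idealization, so treat the output as order-of-magnitude only.

```python
# Rough order-of-magnitude estimate of solar-sail thrust; area and mass are approximate.
SOLAR_FLUX_W_PER_M2 = 1361      # sunlight intensity near Earth
SPEED_OF_LIGHT_M_PER_S = 3.0e8
SAIL_AREA_M2 = 32.0             # approximate LightSail 2 Mylar sail area
SPACECRAFT_MASS_KG = 5.0        # approximate spacecraft mass

# Radiation-pressure force on a perfectly reflecting sail facing the Sun: F = 2 * flux * A / c
force_n = 2 * SOLAR_FLUX_W_PER_M2 * SAIL_AREA_M2 / SPEED_OF_LIGHT_M_PER_S
acceleration = force_n / SPACECRAFT_MASS_KG
delta_v_per_day = acceleration * 86400  # m/s gained per day of continuous thrust

print(f"force ~{force_n * 1e6:.0f} micronewtons, "
      f"acceleration ~{acceleration * 1e3:.3f} mm/s^2, "
      f"~{delta_v_per_day:.1f} m/s of delta-v per day")
```
A few metres per second of delta-v per day is tiny, but it accrues continuously without any propellant.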

The economic paradigm that dominates our world creates barriers to the human capacity to explosively expand knowledge in ways that enable evolutionary, adaptive survival. Science needs to be ‘open-source’, serving human creative generativity for all.

The plan to mine the world’s research papers

A giant data store quietly being built in India could free vast swathes of science for computer analysis — but is it legal?
Carl Malamud is on a crusade to liberate information locked up behind paywalls — and his campaigns have scored many victories. He has spent decades publishing copyrighted legal documents, from building codes to court records, and then arguing that such texts represent public-domain law that ought to be available to any citizen online. Sometimes, he has won those arguments in court. Now, the 60-year-old American technologist is turning his sights on a new objective: freeing paywalled scientific literature. And he thinks he has a legal way to do it.


Over the past year, Malamud has — without asking publishers — teamed up with Indian researchers to build a gigantic store of text and images extracted from 73 million journal articles dating from 1847 up to the present day. The cache, which is still being created, will be kept on a 576-terabyte storage facility at Jawaharlal Nehru University (JNU) in New Delhi. “This is not every journal article ever written, but it’s a lot,” Malamud says. It’s comparable to the size of the core collection in the Web of Science database, for instance. Malamud and his JNU collaborator, bioinformatician Andrew Lynn, call their facility the JNU data depot.


No one will be allowed to read or download work from the repository, because that would breach publishers’ copyright. Instead, Malamud envisages, researchers could crawl over its text and data with computer software, scanning through the world’s scientific literature to pull out insights without actually reading the text.


The unprecedented project is generating much excitement because it could, for the first time, open up vast swathes of the paywalled literature for easy computerized analysis. Dozens of research groups already mine papers to build databases of genes and chemicals, map associations between proteins and diseases, and generate useful scientific hypotheses. But publishers control — and often limit — the speed and scope of such projects, which typically confine themselves to abstracts, not full text. Researchers in India, the United States and the United Kingdom are already making plans to use the JNU store instead. Malamud and Lynn have held workshops at Indian government laboratories and universities to explain the idea. “We bring in professors and explain what we are doing. They get all excited and they say, ‘Oh gosh, this is wonderful’,” says Malamud.
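
As a toy illustration of what "crawling over text and data with computer software" can look like - not the depot's actual tooling; the term lists and sample sentence are invented - a miner might simply count sentence-level co-occurrences of gene and chemical names to seed an association database.

```python
# Toy text-mining pass: count sentence-level co-occurrences of gene and chemical
# names. Illustration only; term lists and the sample sentence are invented.
import re
from collections import Counter
from itertools import product

GENES = {"brca1", "tp53", "egfr"}       # hypothetical gene list
CHEMICALS = {"cisplatin", "aspirin"}    # hypothetical chemical list

def cooccurrences(text):
    pairs = Counter()
    for sentence in re.split(r"[.!?]", text.lower()):
        words = set(re.findall(r"[a-z0-9]+", sentence))
        for gene, chem in product(GENES & words, CHEMICALS & words):
            pairs[(gene, chem)] += 1
    return pairs

sample = ("Cisplatin response was reduced in TP53-mutant lines. "
          "EGFR expression did not change. Aspirin had no effect on TP53 status.")
print(cooccurrences(sample))
```
Scaled to 73 million articles, counts like these are what let researchers assemble gene-chemical and protein-disease association databases without anyone "reading" the papers.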

We continue to advance the domestication of DNA and the human capacity to be a co-creator of evolution - the challenge is to be a great artist.

Personalized Cancer Vaccines in Clinical Trials

The field is young, but predicting antigens produced by patients’ malignant cells could yield successful treatments for individuals with a range of cancer types.
In 2014, at Washington University School of Medicine in St. Louis, six melanoma patients received infusions of an anticancer vaccine composed of their own dendritic cells. Our WashU colleagues had extracted immune cells from the patients’ blood two months earlier, cultured them in the lab, and mixed in peptides selected and synthesized based on specific mutations present in the genomes of each patient’s tumor. The cells had then taken up the peptides much as they take up foreign antigens in the body in the course of normal immune patrol. When the clinical team administered the vaccines—each patient received three infusions over several months—they hoped that the dendritic cells would induce activation and expansion of T cells capable of identifying and destroying the cancer cells, while sparing healthy tissue.


This first test of personalized cancer vaccines in people grew out of our collaborative efforts to develop a computational pipeline to identify tumor-unique mutations that could induce immune responses in cancer patients, helping them to fight their diseases. The pipeline’s origin can be traced to the ideas of Bob Schreiber, a cancer immunologist also at WashU. For many years, Schreiber had studied mice that developed sarcomas after exposure to a chemical carcinogen as a model system for characterizing the interactions between cancers and the immune system. In 2011, he approached us about the possibility of sequencing the DNA of these cancer cells to identify unique cancer peptides, or neoantigens, with the potential to stimulate the immune system against cancer. In contrast to cell-based immune therapies, which directly provide the patient with tumor-attacking T cells, the idea was that these neoantigens could be used to create vaccines that stimulate the differentiation of endogenous killer T cells.

And another amazing signal of new medical interventions.
Hopefully, this technology could lead to new therapeutic strategies not only for patients with spinal cord injury but for those with various inflammatory diseases.

An ‘EpiPen’ for spinal cord injuries

An injection of nanoparticles can prevent the body’s immune system from overreacting to trauma, potentially preventing some spinal cord injuries from resulting in paralysis.


The approach was demonstrated in mice at the University of Michigan, with the nanoparticles enhancing healing by reprogramming the aggressive immune cells—call it an “EpiPen” for trauma to the central nervous system, which includes the brain and spinal cord.


“In this work, we demonstrate that instead of overcoming an immune response, we can co-opt the immune response to work for us to promote the therapeutic response,” said Lonnie Shea, the Steven A. Goldstein Collegiate Professor of Biomedical Engineering.


U-M researchers have designed nanoparticles that intercept immune cells on their way to the spinal cord, redirecting them away from the injury. Those that reach the spinal cord have been altered to be more pro-regenerative.

This is a great site for anyone with knotty problems.

We’ve Got the Knots

Animated Knots by Grog is the web’s premier site for learning how to tie knots of any kind. From Boating Knots, Fishing Knots and Climbing Knots to tying a tie, or even Surgical Knots — we’ve got it covered.


Follow along as ropes tie themselves, showing just the essential steps, so you can master a knot in no time. Jump into any category to get started. If you’re unsure where to begin, try starting with the Basics, our Knot of the Day or check out every knot we’ve got!