Thursday, August 8, 2019

Friday Thinking 9 Aug 2019

Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.

In the 21st Century curiosity will SKILL the cat.

Jobs are dying - Work is just beginning.
Work that engages our whole self becomes play that works.
Techne = Knowledge-as-Know-How :: Technology = Embodied Know-How  
In the 21st century - the planet is the little school house in the galaxy.
Citizenship is the battlefield of the 21st Century

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9



If we continue to operate in terms of a Cartesian dualism of mind versus matter, we shall probably also continue to see the world in terms of God versus man; elite versus people; chosen race versus others; nation versus nation; and man versus environment. It is doubtful whether a species having both an advanced technology and this strange way of looking at its world can endure.

Impossible choices

Gregory Bateson saw the creative potential of paradox


…aspirin was discovered in 1897, and an explanation of how it works followed in 1995. That, in turn, has spurred some research leads on making better pain relievers through something other than trial and error.


This kind of discovery — answers first, explanations later — I call “intellectual debt.” We gain insight into what works without knowing why it works. We can put that insight to use immediately, and then tell ourselves we’ll figure out the details later. Sometimes we pay off the debt quickly; sometimes, as with aspirin, it takes a century; and sometimes we never pay it off at all.


Be they of money or ideas, loans can offer great leverage. We can get the benefits of money — including use as investment to produce more wealth — before we’ve actually earned it, and we can deploy new ideas before having to plumb them to bedrock truth.


Indebtedness also carries risks. For intellectual debt, these risks can be quite profound, both because we are borrowing as a society, rather than individually, and because new technologies of artificial intelligence — specifically, machine learning — are bringing the old model of drug discovery to a seemingly unlimited number of new areas of inquiry. Humanity’s intellectual credit line is undergoing an extraordinary, unasked-for bump up in its limit.


To understand the problems with intellectual debt despite its boon, it helps first to consider a sibling: engineering’s phenomenon of technical debt.

Intellectual Debt: With Great Power Comes Great Ignorance



“The bottom line was that there would be a supernova close enough to the Earth to drastically affect the ozone layer about once every billion years,” says Gehrels, who still works at Goddard. That’s not very often, he admits, and no threatening stars prowl the solar system today. But Earth has existed for 4.6 billion years, and life for about half that time, meaning the odds are good that a supernova blasted the planet sometime in the past. The problem is figuring out when. “Because supernovas mainly affect the atmosphere, it’s hard to find the smoking gun,” Gehrels says.


Astronomers have searched the surrounding cosmos for clues, but the most compelling evidence for a nearby supernova comes—somewhat paradoxically—from the bottom of the sea. Here, a dull and asphalt black mineral formation called a ferromanganese crust grows on the bare bedrock of underwater mountains—incomprehensibly slowly. In its thin, laminated layers, it records the history of planet Earth and, according to some, the first direct evidence of a nearby supernova.


...Based on the concentration of Fe-60 in the crust, Knie estimated that the supernova exploded at least 100 light-years from Earth—three times the distance at which it could’ve obliterated the ozone layer—but close enough to potentially alter cloud formation, and thus, climate. While no mass-extinction events happened 2.8 million years ago, some drastic climate changes did take place—and they may have given a boost to human evolution. Around that time, the African climate dried up, causing the forests to shrink and give way to grassy savanna. Scientists think this change may have encouraged our hominid ancestors as they descended from trees and eventually began walking on two legs.


…This theory, like any young theory, is still speculative and has its opponents. Some scientists think Fe-60 may have been brought to Earth by meteorites, and others think these climate changes can be explained by decreasing greenhouse gas concentrations, or the closing of the ocean gateway between North and South America. But Knie’s new tool gives scientists the ability to date other, possibly more ancient, supernovas that may have passed in the vicinity of Earth, and to study their influence on our planet.
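For the technically curious - a rough back-of-the-envelope calculation (not from the article; it assumes iron-60's commonly cited half-life of roughly 2.6 million years) suggests why the isotope is still measurable, and usable as a clock, a few million years after it arrived:

```python
# Rough check (assumption: Fe-60 half-life ~2.6 million years; the article
# gives only the ~2.8-million-year age of the layer, not the half-life).
HALF_LIFE_MYR = 2.6   # assumed half-life of iron-60, in millions of years
AGE_MYR = 2.8         # approximate age of the Fe-60-bearing crust layer

surviving_fraction = 0.5 ** (AGE_MYR / HALF_LIFE_MYR)
print(f"Fraction of original Fe-60 still present: {surviving_fraction:.2f}")
# ~0.47 -- roughly half of any deposited Fe-60 should remain, which is why
# the isotope is still detectable and useful for dating events a few
# million years back.
```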

The Secret History of the Supernova at the Bottom of the Sea



This is a strong signal of change - not just in the retail food business but in the whole retail paradigm. A very long read (8K words) but illuminating for those interested in the future of business and work. - Reading the subtext - are marketers drinking their own Kool-Aid? Isn’t there a dimension of ‘fake news’ in this approach to the gamification of faux-experiences - all in the name of consuming more?
Forget branding. Forget sales. Kelley’s main challenge is redirecting the attention of older male executives, scared of the future and yet stuck in their ways, to the things that really matter.
“I make my living convincing male skeptics of the power of emotions,” he says.

The Man Who’s Going to Save Your Neighborhood Grocery Store

American food supplies are increasingly channeled through a handful of big companies: Amazon, Walmart, FreshDirect, Blue Apron. What do we lose when local supermarkets go under? A lot — and Kevin Kelley wants to stop that.
Part square-jawed cattle rancher, part folksy CEO, Niemann is the last person you’d expect to ask for a fresh start. He’s spent his whole life in the business, transforming the grocery chain his grandfather founded in 1917 into a regional powerhouse with more than 100 supermarkets and convenience stores across four states. In 2014, he was elected chair of the National Grocers Association. It’s probably fair to say no one alive knows how to run a grocery store better than Rich Niemann. Yet Niemann was no longer sure the future had a place for stores like his.


He was right to be worried. The traditional American supermarket is dying. It’s not just Amazon’s purchase of Whole Foods, an acquisition that trade publication Supermarket News says marked “a new era” for the grocery business — or the fact that Amazon hopes to launch a second new grocery chain in 2019, according to a recent report from The Wall Street Journal, with a potential plan to scale quickly by buying up floundering supermarkets. Even in plush times, grocery is a classic “red ocean” industry, highly undifferentiated and intensely competitive. (The term summons the image of a sea stained with the gore of countless skirmishes.) Now, the industry’s stodgy old playbook — “buy one, get one” sales, coupons in the weekly circular — is hurtling toward obsolescence. And with new ways to sell food ascendant, legacy grocers like Rich Niemann are failing to bring back the customers they once took for granted. You no longer need grocery stores to buy groceries.


….Harvest Market is the anti-Amazon. It’s designed to excel at what e-commerce can’t do: convene people over the mouth-watering appeal of prize ingredients and freshly prepared food. The proportion of groceries sold online is expected to swell over the next five or six years, but Harvest is a bet that behavioral psychology, spatial design, and narrative panache can get people excited about supermarkets again. Kelley isn’t asking grocers to be more like Jeff Bezos or Sam Walton. He’s not asking them to be ruthless, race-to-the-bottom merchants. In fact, he thinks that grocery stores can be something far greater than we ever imagined — a place where farmers and their urban customers can meet, a crucial link between the city and the country.


But first, if they’re going to survive, Kelley says, grocers need to start thinking like Alfred Hitchcock.

Here’s a signal of the proliferation of curiosities when more people have access to the means of expression. There are 10 min, 17 min and 3 min example videos for those curious about curiosities.
Mukbang, a portmanteau of the Korean words for “eating” and “broadcasting,” first popped up on Korean live-streaming sites like Afreeca in 2010. Mukbang hosts provide company for viewers dining alone and, in some cases, act as their avatars, eating whatever the audience wanted them to eat.

In the Country Where Mukbang Was Invented, This YouTuber Is Pushing the Genre

ASMR + mukbang + supersized foods = content gold
If the point of mukbang is to provide viewers with the secondhand satisfaction of watching someone else eat delicious food, then the ASMR mukbang videos of South Korean YouTuber Yammoo satisfy a craving you didn’t even know you had. Toward the stranger end of the spectrum of “food” she’s eaten: stones (made of chocolate) and light bulbs (made of candy). The video where she eats balls of air pollution (made of cotton candy) is a perfect example of how the creator is pushing the decade-old mukbang genre into its next evolution.


With air pollution levels at a record high, Seoul citizens have had to incorporate face masks, air quality apps, and home air purifiers into their daily lives. “This is a mukbang everyone can join in together,” Yammoo tells her viewers. “If you’re home, open your windows — if you’re outside, you’re already participating.” She removed her black face mask and began eating the grayish cotton candy balls, her chewing sounds amplified by the microphone. “The air quality levels were good today because you ate all of the pollution ^^♥,” wrote a commenter.

It is a truism of our times that life-long learning is no longer a luxury for those with both the intrinsic motivation and the leisure to pursue it - it is a requirement of an employable life-long career. But what most of us forget is the high degree of unlearning that is also required to embrace accelerating change.
Maybe that is one of the key uses of 'narrative' in creating cultural coherence in the face of change and forgetting - and key to our sense of self-continuity in the face of ephemeral memory.
Much is still unknown about how memories are created and accessed, and addressing such mysteries has consumed a lot of memory researchers’ time. How the brain forgets, by comparison, has been largely overlooked. 


… its dynamic nature is not a flaw but a feature, Frankland says — something that evolved to aid learning. The environment is changing constantly and, to survive, animals must adapt to new situations. Allowing fresh information to overwrite the old helps them to achieve that.

The forgotten part of memory

Long thought to be a glitch of memory, researchers are coming to realize that the ability to forget is crucial to how the brain works.
Memories make us who we are. They shape our understanding of the world and help us to predict what’s coming. For more than a century, researchers have been working to understand how memories are formed and then fixed for recall in the days, weeks or even years that follow. But those scientists might have been looking at only half the picture. To understand how we remember, we must also understand how, and why, we forget.


Until about ten years ago, most researchers thought that forgetting was a passive process in which memories, unused, decay over time like a photograph left in the sunlight. But then a handful of researchers who were investigating memory began to bump up against findings that seemed to contradict that decades-old assumption. They began to put forward the radical idea that the brain is built to forget.


A growing body of work, cultivated in the past decade, suggests that the loss of memories is not a passive process. Rather, forgetting seems to be an active mechanism that is constantly at work in the brain. In some — perhaps even all — animals, the brain’s standard state is not to remember, but to forget. And a better understanding of that state could lead to breakthroughs in treatments for conditions such as anxiety, post-traumatic stress disorder (PTSD), and even Alzheimer’s disease.

This is an interesting signal of a potential new occupation in the world of Big Data and algorithms - Data Detective - or Data Vigilante Hero - Dat Man.
“His technique has been shown to be incredibly useful,” says Paul Myles, director of anaesthesia and perioperative medicine at the Alfred hospital in Melbourne, Australia, who has worked with Carlisle to examine research papers containing dodgy statistics. “He’s used it to demonstrate some major examples of fraud.”
Carlisle believes that he is helping to protect patients, which is why he spends his spare time poring over others’ studies. “I do it because my curiosity motivates me to do so,” he says, not because of an overwhelming zeal to uncover wrongdoing: “It’s important not to become a crusader against misconduct.”

How a data detective exposed suspicious medical trials

Anaesthetist John Carlisle has spotted problems in hundreds of research papers — and spurred a leading medical journal to change its practice.
By day, Carlisle is an anaesthetist working for England’s National Health Service in the seaside town of Torquay. But in his spare time, he roots around the scientific record for suspect data in clinical research. Over the past decade, his sleuthing has included trials used to investigate a wide range of health issues, from the benefits of specific diets to guidelines for hospital treatment. It has led to hundreds of papers being retracted and corrected, because of both misconduct and mistakes. And it has helped to end the careers of some large-scale fakers: of the six scientists worldwide with the most retractions, three were brought down using variants of Carlisle’s data analyses.


Together with the work of other researchers who doggedly check academic papers, his efforts suggest that the gatekeepers of science — journals and institutions — could be doing much more to spot mistakes. In medical trials, the kind that Carlisle focuses on, that can be a matter of life and death.
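The article doesn’t spell out Carlisle’s statistics, but the general idea behind this kind of screening is that baseline differences between randomized groups should be governed by chance alone - so the p-values for those differences, pooled across many variables and trials, should look roughly uniform. A minimal, hypothetical sketch of that check (the trial numbers below are invented):

```python
# Hedged sketch of a Carlisle-style baseline check: compute p-values for
# baseline differences between randomized groups, then test whether those
# p-values look uniformly distributed (as chance alone would predict).
# All numbers below are invented for illustration.
from scipy import stats

# Reported baseline summaries: (mean_a, sd_a, n_a, mean_b, sd_b, n_b)
baseline_variables = [
    (54.1, 9.8, 120, 53.9, 10.1, 118),    # e.g. age
    (78.0, 12.5, 120, 77.6, 12.9, 118),   # e.g. weight
    (121.3, 14.2, 120, 122.0, 13.8, 118), # e.g. systolic blood pressure
]

p_values = []
for m_a, sd_a, n_a, m_b, sd_b, n_b in baseline_variables:
    _, p = stats.ttest_ind_from_stats(m_a, sd_a, n_a, m_b, sd_b, n_b)
    p_values.append(p)

# If randomization were genuine, these p-values should be roughly uniform
# on [0, 1]; a strong deviation across many variables and trials is a flag
# for further scrutiny, not proof of fraud.
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print("baseline p-values:", [round(p, 3) for p in p_values])
print(f"KS test against uniform: stat={ks_stat:.2f}, p={ks_p:.2f}")
```

Real screening pools far more variables across hundreds of trials; a handful of p-values proves nothing on its own.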


This is a good signal in the emerging development of AI. For about a decade, protein-folding research has taken advantage of crowdsourcing by having humans engage in a game called Foldit. Well, that may not last another decade.

AI protein-folding algorithms solve structures faster than ever

Deep learning makes its mark on protein-structure prediction.
The race to crack one of biology’s grandest challenges — predicting the 3D structures of proteins from their amino-acid sequences — is intensifying, thanks to new artificial-intelligence (AI) approaches.


At the end of last year, Google’s AI firm DeepMind debuted an algorithm called AlphaFold, which combined two techniques that were emerging in the field and beat established contenders in a competition on protein-structure prediction by a surprising margin. And in April this year, a US researcher revealed an algorithm that uses a totally different approach. He claims his AI is up to one million times faster at predicting structures than DeepMind’s, although probably not as accurate in all situations.


The latest algorithm’s creator, Mohammed AlQuraishi, a biologist at Harvard Medical School in Boston, Massachusetts, hasn’t yet directly compared the accuracy of his method with that of AlphaFold — and he suspects that AlphaFold would beat his technique in accuracy when proteins with sequences similar to the one being analysed are available for reference. But he says that because his algorithm uses a mathematical function to calculate protein structures in a single step — rather than in two steps like AlphaFold, which uses the similar structures as groundwork in the first step — it can predict structures in milliseconds rather than hours or days.


Technical difficulties meant AlQuraishi’s algorithm did not perform well at CASP13. He published details of the AI in Cell Systems in April and made his code publicly available on GitHub, hoping others will build on the work. (The structures for most of the proteins tested in CASP13 have not been made public yet, so he still hasn’t been able to directly compare his method with AlphaFold.)
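Neither AlphaFold’s nor AlQuraishi’s code is shown here, but the ‘single mathematical function’ idea - map a sequence directly to structural parameters in one differentiable forward pass - can be sketched as a toy model (dimensions and data are invented; real recurrent geometric networks predict backbone torsion angles and convert them to 3D coordinates):

```python
# Toy sketch of end-to-end, single-pass structure prediction: a recurrent
# network reads an amino-acid sequence and emits three backbone torsion
# angles per residue in one forward pass. Dimensions and data are invented.
import math
import torch
import torch.nn as nn

class ToySequenceToTorsions(nn.Module):
    def __init__(self, n_amino_acids=21, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_amino_acids, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                           bidirectional=True)
        self.to_angles = nn.Linear(2 * hidden_dim, 3)  # phi, psi, omega

    def forward(self, sequence_ids):
        x = self.embed(sequence_ids)       # (batch, length, embed_dim)
        x, _ = self.rnn(x)                 # (batch, length, 2 * hidden_dim)
        return math.pi * torch.tanh(self.to_angles(x))  # angles in (-pi, pi)

model = ToySequenceToTorsions()
fake_sequence = torch.randint(0, 21, (1, 150))  # one 150-residue "protein"
angles = model(fake_sequence)                   # a single forward pass
print(angles.shape)                             # torch.Size([1, 150, 3])
```

Because everything happens in one pass through the network, prediction time is a single forward evaluation rather than an iterative search - which is the speed advantage the article describes.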

And more about using AI to enhance human research capacity.
"This confirmed that SAM not only had the ability to identify good drugs but in fact had come up with better human immune drugs than currently exist," 
Petrovsky said this potentially shortens the normal drug discovery and development process by decades and saves hundreds of millions of dollars.

Australian Researchers Have Just Released The World's First AI-Developed Vaccine

A team at Flinders University in South Australia has developed a new vaccine believed to be the first human drug in the world to be completely designed by artificial intelligence (AI).


While drugs have been designed using computers before, this vaccine went one step further, being independently created by an AI program called SAM (Search Algorithm for Ligands).


Flinders University Professor Nikolai Petrovsky, who led the development, told Business Insider Australia its name is derived from what it was tasked to do: search the universe for all conceivable compounds to find a good human drug (also called a ligand).


"We had to teach the AI program on a set of compounds that are known to activate the human immune system, and a set of compounds that don't work. The job of the AI was then to work out for itself what distinguished a drug that worked from one that doesn't," Petrovsky said, who is also the Research Director of Australian biotechnology company Vaxine.

This is an interesting signal of how to potentially protect people from HIV - but also a new paradigm of vaccine. 

This implant could prevent HIV infection

A tiny implant may prevent a person from getting HIV for a year, reports the New York Times.
The implant: It’s a plastic tube the size of a matchstick that slowly releases an anti-HIV drug. It would be placed under the skin of the arm.


HIV prevention: Even if you don't have the virus, taking anti-HIV drugs daily can stop you from getting infected. Such "pre-exposure prophylaxis," or PrEP, is for people at high risk of getting the virus, according to the Centers for Disease Control and Prevention. 


Set and forget: The idea behind the implant is that it would make PrEP easier. Because it would release an antiviral drug little by little over months, people would not have to remember to swallow pills. It’s based on a similar implant for birth control.


The evidence: The device is being developed by Merck, which carried out a three-month-long preliminary test in just 12 men. It contains an experimental, but long-acting, anti-HIV drug called islatravir. The company presented the prototype today at an HIV science conference in Mexico City. 

This is a fascinating weak signal related to the domestication of DNA (and thus also of proteomics) and the alleviation of allergies. 

Giving cats food with an antibody may help people with cat allergies

Pet-food maker Purina is studying how adding an antibody to the chow curbs reactions
Cat lovers who sneeze and sniffle around their feline friends might one day find at least partial relief in a can of cat food.


New research suggests that feeding cats an antibody to the major allergy-causing protein in cats renders some of the protein, called Fel d1, unrecognizable to the human immune system, reducing an allergic response. After 105 cats were fed the antibody for 10 weeks, the amount of active Fel d1 protein on the cats’ hair dropped by 47 percent on average, researchers from pet food–maker Nestlé Purina report in the June Immunity, Inflammation and Disease.


And in a small pilot study, 11 people allergic to cats experienced substantially reduced nasal symptoms and less itchy, scratchy eyes when exposed in a test chamber to hair from cats fed the antibody diet, compared with cats fed a control diet. The preliminary findings were presented in Lisbon, Portugal at the European Academy of Allergy and Clinical Immunology Congress in June.


The Fel d1 protein is produced in cats’ salivary and sebaceous glands. Cats transfer the protein to their hair when they groom by licking themselves and excrete it in their urine. Humans are then exposed to it on cat hair and dander — dead skin — or in the litter box. Cat allergies plague up to 20 percent of people, and Fel d1 is responsible for 95 percent of allergic reactions to cats.

An excellent signal of the continuing development of other computational paradigms. There is a 3 min video explanation as well.
With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware. Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time.

Brains scale better than CPUs. So Intel is building brains

The new Pohoiki Beach builds on the 2017 success of Intel's Loihi NPU.
Neuromorphic engineering—building machines that mimic the function of organic brains in hardware as well as software—is becoming more and more prominent. The field has progressed rapidly, from conceptual beginnings in the late 1980s to experimental field programmable neural arrays in 2006, early memristor-powered device proposals in 2012, IBM's TrueNorth NPU in 2014, and Intel's Loihi neuromorphic processor in 2017. Yesterday, Intel broke a little more new ground with the debut of a larger-scale neuromorphic system, Pohoiki Beach, which integrates 64 of its Loihi chips.


Where traditional computing works by running numbers through an optimized pipeline, neuromorphic hardware performs calculations using artificial "neurons" that communicate with each other. This is a workflow that's highly specialized for specific applications, much like the natural neurons it mimics in function—so you likely won't replace conventional computers with Pohoiki Beach systems or its descendants, for the same reasons you wouldn't replace a desktop calculator with a human mathematics major.


However, neuromorphic hardware is proving able to handle tasks organic brains excel at much more efficiently than conventional processors or GPUs can. Visual object recognition is perhaps the most widely realized task where neural networks excel, but other examples include playing foosball, adding kinesthetic intelligence to prosthetic limbs, and even understanding skin touch in ways similar to how a human or animal might understand it.


Loihi, the underlying chip Pohoiki Beach is integrated from, consists of 130,000 neuron analogs—hardware-wise, this is roughly equivalent to half of the neural capacity of a fruit fly. Pohoiki Beach scales that up to 8 million neurons—about the neural capacity of a zebrafish. But what's perhaps more interesting than the raw computational power of the new neural network is how well it scales.
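Loihi’s neuron model and learning rules are far richer than anything that fits here, but the basic idea of ‘artificial neurons that communicate’ by spikes can be illustrated with a toy leaky integrate-and-fire neuron (all parameters invented for illustration):

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates input current, and emits a spike when it crosses
# a threshold. Illustrative only; Loihi's neuron model is richer than this.
import numpy as np

dt, steps = 1.0, 200          # ms per step, number of steps
tau, v_rest = 20.0, 0.0       # membrane time constant (ms), resting potential
v_thresh, v_reset = 1.0, 0.0  # spike threshold and post-spike reset
input_current = 0.06          # constant injected current (arbitrary units)

v = v_rest
spike_times = []
for t in range(steps):
    # leak toward rest plus integration of the input current
    v += dt * (-(v - v_rest) / tau + input_current)
    if v >= v_thresh:
        spike_times.append(t * dt)
        v = v_reset              # fire and reset

print(f"{len(spike_times)} spikes in {steps * dt:.0f} ms; first at", spike_times[:3])
```

Because such neurons only do work when spikes arrive, large networks of them can be very power-efficient - the property the Intel quote above is highlighting.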


Here is a 17 min video explaining neuromorphic computing

What Is Neuromorphic Computing (Cognitive Computing)

We’ll discuss what cognitive computing is – more specifically, the difference between the current von Neumann computing architecture and the more biologically representative neuromorphic architecture – and how these two paired together will yield massive performance and efficiency gains!
Following that, we’ll discuss the benefits of cognitive computing systems further, as well as current cognitive computing initiatives, TrueNorth and Loihi.
To conclude, we’ll extrapolate and discuss the future of cognitive computing in terms of brain simulation, artificial intelligence and brain-computer interfaces!

This is a weak signal - but fascinating - given the increasing interest in the hard problem of consciousness and the nature of the universe.

The Strange Similarity of Neuron and Galaxy Networks

Your life’s memories could, in principle, be stored in the universe’s structure.
Christof Koch, a leading researcher on consciousness and the human brain, has famously called the brain “the most complex object in the known universe.” It’s not hard to see why this might be true. With a hundred billion neurons and a hundred trillion connections, the brain is a dizzyingly complex object.


But there are plenty of other complicated objects in the universe. For example, galaxies can group into enormous structures (called clusters, superclusters, and filaments) that stretch for hundreds of millions of light-years. The boundary between these structures and neighboring stretches of empty space called cosmic voids can be extremely complex. Gravity accelerates matter at these boundaries to speeds of thousands of kilometers per second, creating shock waves and turbulence in intergalactic gases. We have predicted that the void-filament boundary is one of the most complex volumes of the universe, as measured by the number of bits of information it takes to describe it.


This got us to thinking: Is it more complex than the brain?
So we—an astrophysicist and a neuroscientist—joined forces to quantitatively compare the complexity of galaxy networks and neuronal networks. The first results from our comparison are truly surprising: Not only are the complexities of the brain and cosmic web actually similar, but so are their structures. The universe may be self-similar across scales that differ in size by a factor of a billion billion billion.
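The excerpt doesn’t give the authors’ actual metrics, but the flavour of ‘quantitatively comparing two very different networks’ can be sketched with standard graph statistics on toy graphs - here two random-graph models stand in for the neuronal and cosmic webs:

```python
# Toy illustration of comparing two networks with standard graph statistics.
# The two random-graph models below are stand-ins, not real brain or cosmic
# data; the point is only the kind of comparison such a study involves.
import networkx as nx
import numpy as np

brain_like = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=1)  # small-world stand-in
cosmic_like = nx.barabasi_albert_graph(n=1000, m=5, seed=1)        # scale-free stand-in

def describe(name, g):
    degrees = np.array([d for _, d in g.degree()])
    print(f"{name}: mean degree {degrees.mean():.1f}, "
          f"clustering {nx.average_clustering(g):.3f}, "
          f"degree std {degrees.std():.1f}")

describe("brain-like toy graph ", brain_like)
describe("cosmic-like toy graph", cosmic_like)
```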

Another signal in the growing body of research indicating the importance of our microbial profile for physical and mental health.

Boosting a gut bacterium helps mice fight an ALS-like disease

People with Lou Gehrig's disease appear to have a dearth of the microbes
Mice that develop a degenerative nerve disease similar to amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, fared better when bacteria making vitamin B3 were living in their intestines, researchers report July 22 in Nature. Those results suggest that gut microbes may make molecules that can slow progression of the deadly disease.


The researchers uncovered clues that the mouse results may also be important for people with ALS. But the results are too preliminary to inform any changes in treating the disease, which at any given time affects about two out of every 100,000 people, or about 16,000 people in the United States, says Eran Elinav, a microbiome researcher at the Weizmann Institute of Science in Rehovot, Israel.


Microbiomes of ALS mice contained almost no Akkermansia muciniphila bacteria. Restoring A. muciniphila in the ALS mice slowed progression of the disease, and the mice lived longer than untreated rodents. By contrast, greater numbers of two other normal gut bacteria, Ruminococcus torques and Parabacteroides distasonis, were associated with more severe symptoms.

This is an excellent signal of the positive potential of domesticating DNA.

This is the first fungus known to host complex algae inside its cells

It’s unclear if the newly discovered alliance exists in the wild
A soil fungus and a marine alga have formed a beautiful friendship.


In a lab dish, scientists grew the fungus Mortierella elongata with a photosynthetic alga called Nannochloropsis oceanica. This odd couple formed a mutually beneficial team that kept each other going when nutrients such as carbon and nitrogen were scarce, researchers report July 16 in eLife.


Surprisingly, after about a month together, the partners got even cozier. Algal cells began growing inside the fungi’s super long cells called hyphae — the first time that scientists have identified a fungus that can harbor eukaryotic algae inside itself. (In eukaryotic cells, DNA is stored in the nucleus.) In lichens, a symbiotic pairing of fungi and algae, the algae remain outside of the fungal cells. 


In the new study, biochemist Zhi-Yan Du of Michigan State University in East Lansing and his colleagues used heavy forms of carbon and nitrogen to trace the organisms’ nutrient exchange. The fungi passed more than twice as much nitrogen to their algal partners as the algae sent to the fungi, the team found. And while both partners lent each other carbon, the algal cells had to touch the fungi’s hyphae cells to make their carbon deliveries.


It’s unclear if the newly discovered alliance exists in the wild. But both N. oceanica and M. elongata are found around the world, and could interact in places such as tidal zones. Learning more about how the duo teams up may shed light on how symbiotic partnerships evolve.

An excellent signal of the emergence of new methodologies in biological research by leveraging AI and other technologies. There is an excellent 2 min video explanation as well.
What’s so hard about seeing inside a living cell?
If you want to look at a cell when it’s alive, there are basically two limitations. We can blast the cell with laser light to get these [fluorescent protein] labels to illuminate. But that laser light is phototoxic — the cell is just basically baking in the sun in the desert.
The other limitation is that these labels are attached to an original protein in the cell that needs to go somewhere and do things. But the protein now has this big stupid fluorescent molecule attached to it. That might change the way the cell works if I have too many labels. Sometimes when you try to introduce these fluorescent labels, your experiment just doesn’t work out. Sometimes, the labels are lethal to the cell.

His Artificial Intelligence Sees Inside Living Cells

Instead of performing expensive fluorescence imaging experiments, scientists can use this “label-free determination” to efficiently assemble high-fidelity, three-dimensional movies of the interiors of living cells.


The data can also be used to build a biologically accurate model of an idealized cell — something like the neatly labeled diagram in a high school textbook but with greater scientific accuracy. That’s the goal of the institute’s project.


Johnson’s use of machine learning to visualize cell interiors began in 2010 at Carnegie Mellon University, just before a series of breakthroughs in deep learning technology began to transform the field of artificial intelligence. Nearly a decade later, Johnson thinks that his AI-augmented approach to live cell imaging could lead to software models that are so accurate, they reduce or even altogether eliminate the need for certain experiments. “We want to be able to take the cheapest image [of a cell] that we can and predict from that as much about that cell as we possibly can,” he said. “How is it organized? What’s the gene expression? What are its neighbors doing? For me, [label-free determination] is just a prototype for much more sophisticated things to come.”
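None of the Allen Institute’s code appears here; their label-free tools are built on U-Net-style models trained on paired transmitted-light and fluorescence image stacks. A heavily reduced 2D sketch of the underlying idea - regress the fluorescence channel directly from a cheap brightfield image - might look like this (random tensors stand in for real image pairs):

```python
# Reduced sketch of "label-free determination": learn an image-to-image map
# from transmitted-light (brightfield) images to fluorescence images, so the
# expensive, phototoxic label channel can be predicted rather than acquired.
# Random tensors stand in for real paired microscopy data.
import torch
import torch.nn as nn

predictor = nn.Sequential(                # tiny stand-in for a U-Net
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

brightfield = torch.rand(8, 1, 64, 64)    # cheap transmitted-light images
fluorescence = torch.rand(8, 1, 64, 64)   # paired "ground truth" label channel

optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5):                     # a few illustrative training steps
    predicted = predictor(brightfield)
    loss = loss_fn(predicted, fluorescence)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: pixel-wise MSE = {loss.item():.4f}")
```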
