Thursday, January 25, 2018

Friday Thinking 26 Jan. 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


Content

Quotes:

Articles:



The historian Samuel Edgerton recounts this remarkable segue into modern science in The Heritage of Giotto’s Geometry (1991), noting how the overthrow of Aristotelian thinking about space was achieved in part as a long, slow byproduct of people standing in front of perspectival paintings and feeling, viscerally, as if they were ‘looking through’ to three-dimensional worlds on the other side of the wall. What is so extraordinary here is that, while philosophers and proto-scientists were cautiously challenging Aristotelian precepts about space, artists cut a radical swathe through this intellectual territory by appealing to the senses. In a very literal fashion, perspectival representation was a form of virtual reality that, like today’s VR games, aimed to give viewers the illusion that they had been transported into geometrically coherent and psychologically convincing other worlds.

The illusionary Euclidean space of perspectival representation that gradually imprinted itself on European consciousness was embraced by Descartes and Galileo as the space of the real world. Worth adding here is that Galileo himself was trained in perspective. His ability to represent depth was a critical feature in his groundbreaking drawings of the Moon, which depicted mountains and valleys and implied that the Moon was as solidly material as the Earth.

A view is emerging among some theoretical physicists that space might in fact be an emergent phenomenon created by something more fundamental, in much the same way that temperature emerges as a macroscopic property resulting from the motion of molecules. As Dijkgraaf put it: ‘The present point of view thinks of space-time not as a starting point, but as an end point, as a natural structure that emerges out of the complexity of quantum information.’

A leading proponent of new ways of thinking about space is the cosmologist Sean Carroll at Caltech, who recently said that classical space isn’t ‘a fundamental part of reality’s architecture’, and argued that we are wrong to assign such special status to its four or 10 or 11 dimensions. Where Dijkgraaf makes an analogy with temperature, Carroll invites us to consider ‘wetness’, an emergent phenomenon of lots of water molecules coming together. No individual water molecule is wet; only when you get a bunch of them together does wetness come into being as a quality. So, he says, space emerges from more basic things at the quantum level.

Radical dimensions




The most basic way to tell the feminine kind of story is in the form of a whispered rumor at work. The carrier-bag novel can be understood as egalitarian forager-society gossip, reified, elevated and distilled into enduring emergent social truths. Signal and noise snowball, via a game of telephone, until the rumor becomes part of the collective unconscious, as an acknowledged truth with no author. If the hero’s journey brings narrative rents to heroes, carrier-bag tales allow narrative tax revenue to accrue to a Weltanschauung.

There is a pleasing symmetry here. Myth and truth. Stories with and without authors. Stories that hunt between contexts and stories that nest within a context. Self-consciously installed myths that never quite sink into their carrier-bag contexts as lived truths, and lived truths that never quite pop from their carrier contexts as explicit beliefs or narrative patterns. New information versus open secrets. Narrative rents and context taxes. There-and-back-again finite game stories, and play-to-continue-the-game infinite game stories (that last is a reference to James Carse’s finite/infinite game model, which is almost required reading around here now).

Boat Stories




I propose that it has become literally unthinkable to do good work in any interesting field with the premises of individualism, methodological individualism, and human exceptionalism. None of the most generative and creative intellectual work being done today any longer spends much time (except as a kind of footnote) talking, doing creative work with the premises of individualism and methodological individualism, and I’ll try to illustrate that a bit, primarily from some of the natural sciences.

Simultaneously, there has been an explosion within the biologies of multispecies becoming-with, of an understanding that to be a one at all, you must be a many and it’s not a metaphor. That it’s about the tissues of being anything at all. And that those who are have been in relationality all the way down. There is no place that the layers of the onion come to rest on some kind of foundation.

Haraway - Anthropocene, Capitalocene, Chthulucene: Staying with the Trouble




Within weeks of conception, cells from both mother and foetus traffic back and forth across the placenta, resulting in one becoming a part of the other. During pregnancy, as much as 10 per cent of the free-floating DNA in the mother’s bloodstream comes from the foetus, and while these numbers drop precipitously after birth, some cells remain. Children, in turn, carry a population of cells acquired from their mothers that can persist well into adulthood, and in the case of females might inform the health of their own offspring. And the foetus need not come to full term to leave its lasting imprint on the mother: a woman who had a miscarriage or terminated a pregnancy will still harbour foetal cells. With each successive conception, the mother’s reservoir of foreign material grows deeper and more complex, with further opportunities to transfer cells from older siblings to younger children, or even across multiple generations.

Far from drifting at random, human and animal studies have found foetal origin cells in the mother’s bloodstream, skin and all major organs, even showing up as part of the beating heart. This passage means that women carry at least three unique cell populations in their bodies – their own, their mother’s, and their child’s – creating what biologists term a microchimera, named for the Greek fire-breathing monster with the head of a lion, the body of a goat, and the tail of a serpent.

The self emerging from microchimeric research appears to be of a different order: porous, unbounded, rendered constituently. Nelson suggests that each human being is not so much an isolated island as a dynamic ecosystem. And if this is the case, the question follows as to how this state of collectivity changes our conscious and unconscious motivations. If I am both my children and my mother, if I carry traces of my sibling and remnants of pregnancies that never resulted in birth, does that change who I am and the way I behave in the world? If we are to take to heart Whitman’s multitudes, we encounter an I composed of shared identity, collective affiliations and motivations that emerge not from a mean and solitary struggle, but a group investment in greater survival.

We are multitudes




Knowledge-mobilizing space (ba)
“Having all the different departments work on the project together meant things went slow, but the ba was great, and the breakthrough wouldn’t have been possible otherwise.”

Ba is about the arrangement of elements to create connections that are more likely to produce new knowledge or experiences. While wa focuses on relationships, ba is concerned with how knowledge is formed and shared. If wa is about social and interpersonal harmony, ba is about ensuring that people’s knowledge and experience can be put to good use.

The open-office concept is a reflection of ba as a design principle. Japanese offices are often very open, with many workers sharing a large table and workspace. This arrangement allows for the rapid sharing of information, sometimes by accident. The Japanese also prioritize interdisciplinary teams because they believe that the concentration of different ways of seeing the world will lead to breakthroughs. There is often a lack of efficiency when bringing together different specializations, but ba requires shared space for different relationships and experiences to be brought forward.

To endow our lives with ba, we might follow social media accounts that are outside of our experience or tastes, attend events or conferences outside of our specialization, and meet and interact with people we might not normally meet. Ba asks us to be open to interruptions and distractions when our temptation is to be closed and focused. The assumption is that what we know is only valuable if it rubs up against what other people know.

The Japanese words for “space” could change your view of the world





In the first month of 2018 - it is helpful to have some foundation for optimism: that the world has in fact made progress, which means we can make more progress. Very importantly, Pinker points out the reasons so many people fall to a pessimistic default position.

STEVEN PINKER - ENLIGHTENMENT NOW - Does Progress Exist?

Optimism about human progress in the world is rational and measurable: peace, life expectancy, literacy, wealth, etc. ARE improving. :D
Lecture given in April 2017 with the Cambridge Conservation Initiative.


This is a great 25 min video by Donna Haraway.

“Anthropocene, Capitalocene, Chthulucene: Staying with the Trouble”

Sympoiesis, not autopoiesis, threads the string figure game played by Terran critters. Always many-stranded, SF is spun from science fact, speculative fabulation, science fiction, and, in French, soin de ficelles (care of/for the threads). The sciences of the mid-20th-century “new evolutionary synthesis” shaped approaches to human-induced mass extinctions and reworldings later named the Anthropocene. Rooted in units and relations, especially competitive relations, these sciences have a hard time with three key biological domains: embryology and development, symbiosis and collaborative entanglements, and the vast worlds of microbes. Approaches tuned to “multi-species becoming with” better sustain us in staying with the trouble on Terra. An emerging “new new synthesis” in trans-disciplinary biologies and arts proposes string figures tying together human and nonhuman ecologies, evolution, development, history, technology, and more. Corals, microbes, robotic and fleshly geese, artists, and scientists are the dramatis personae in this talk’s SF game.


This is a wonderful 15 Min TED Talk by David Deutsch - for anyone interested in the foundations of good scientific knowledge - this is a delight.

A new way to explain explanation

For tens of thousands of years our ancestors understood the world through myths, and the pace of change was glacial. The rise of scientific understanding transformed the world within a few centuries. Why? Physicist David Deutsch proposes a subtle answer.


This is a great podcast conversation that is both very comprehensive and insightful - anyone interested in deepening their understanding will find this rewarding.

Fred Ehrsam | Cryptocurrency's Past, Present & Future

Cryptocurrency: From Basic Definitions to Expert Issues in One Mighty Interview
You’d have to be living in some kind of a news blackout not to have heard chatter about cryptocurrencies recently. The granddaddy of ‘em all – Bitcoin – has appreciated roughly 2000% over the past twelve months. This puts the total value of all Bitcoin close to $300B, making it more valuable than roughly 490 of the companies in the Fortune 500 – and far more valuable than any of the banks that were deemed too big to fail during the financial crisis.
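
A rough back-of-envelope check of those figures (the price and supply numbers below are my assumptions for early 2018, not from the interview):

```python
# Sanity-check the market-cap claim: total value = price x circulating supply.
price_now = 17_500           # assumed USD price per bitcoin, January 2018
supply = 16_800_000          # assumed bitcoins in circulation, January 2018

market_cap = price_now * supply
print(f"total value ~ ${market_cap / 1e9:.0f}B")    # ~$294B, i.e. "close to $300B"

# "Appreciated roughly 2000%" means today's price is ~21x the year-ago price:
price_year_ago = price_now / (1 + 2000 / 100)
print(f"implied price a year earlier ~ ${price_year_ago:,.0f}")   # ~$833
```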

So what in the world is going on here? As with all large markets, nobody fully knows. But my interviewee in this episode, Fred Ehrsam, knows this area better than almost anyone. In 2012, he co-founded Coinbase, which is by far the world’s largest consumer-friendly service for storing and trading cryptocurrencies (though its users include many large nonconsumers as well).

Although our interview is a spontaneous conversation, Fred and I both put methodical thought into sequencing our topics, as well as the level of depth that we treat each with. The result is a robust introduction for those who know nothing about cryptocurrencies, which can also truly fire the neurons of experts in this field. Will AIs start running on the blockchain? Could a full-fledged Uber, Lyft, or Airbnb competitor exist as a cloud-based smart contract? And how might the emergence of Ethereum stand in a certain line of historic events that stretches back before the Bronze Age?


This may be coming to a superstore near us soon. Sensors, Internet of Things, AI and more.

Walmart is taking a direct shot at Amazon and making checkout lanes obsolete

Walmart is rolling out its "Scan & Go" technology to 100 additional stores by the end of January.

The technology enables shoppers to scan and pay for items without checkout lanes, registers, or cashiers.

Amazon and Kroger have been developing similar technology. Kroger is rolling out its own "Scan, Bag, Go" service to 400 stores this year.


The progress in robots continues on an exponential scale, benefiting from advances in other domains. There are two short videos as well.

Harvard's milliDelta Robot Is Tiny and Scary Fast

In terms of sheer speed and precision, delta robots are some of the most impressive to watch. They’re also some of the most useful, for the same reasons—you can see them doing pick-and-place tasks in factories of all kinds, far faster than humans can. The delta robots that we’re familiar with are mostly designed as human-replacement devices, but as it turns out, scaling them down makes them even more impressive. In Robert Wood’s Microrobotics Lab at Harvard, researcher Hayley McClintock has designed one of the tiniest delta robots ever. Called milliDelta, it may be small, but it’s one of the fastest moving and most precise robots we’ve ever seen.

Delta robots have two things about them that are particularly clever. The first one is that despite the highly dynamic nature of a delta robot, its motors are stationary. Most robot arms are made up of a series of rigid links and joints with motors in them, which is fine, except that it makes the arm itself very heavy. Moving all the motors to the base of the robot instead means that there’s way less mass that you have to move around, which is how delta robots can, in general, accelerate so rapidly and move so precisely. The second clever thing is that the end-effector of a delta robot—the bit where the arms come together—can stay parallel to the work surface (delta robots are a type of parallel robot). This makes delta robots ideal for pick-and-place operations, since they maintain the orientation of the thing you’re picking up.

Harvard’s delta robot takes all of this cleverness and shrinks it down into a fearsome little package. The 15 mm x 15 mm x 20 mm robot weighs just 430 milligrams, but it has a payload capacity of 1.3 grams. It can move around its 7 cubic millimeter workspace with a precision of about 5 micrometers. What’s really impressive, though, is the speed: It can reach velocities of 0.45 m/s and accelerations of 215 m/s², meaning that it can follow repeating patterns at a frequency of up to 75 Hz.
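
Those numbers are mutually consistent. If you model a repeating pick-and-place stroke as simple harmonic motion (my simplification, not the researchers’ analysis), the 75 Hz frequency and 0.45 m/s peak velocity imply almost exactly the reported acceleration:

```python
import math

# For simple harmonic motion x(t) = A*sin(2*pi*f*t):
#   peak velocity     v = 2*pi*f*A
#   peak acceleration a = (2*pi*f)^2 * A = 2*pi*f * v
f = 75.0          # repetition frequency in Hz (from the article)
v = 0.45          # peak velocity in m/s (from the article)

A = v / (2 * math.pi * f)     # implied stroke amplitude
a = 2 * math.pi * f * v       # implied peak acceleration

print(f"amplitude ~ {A * 1e3:.2f} mm")   # ~0.95 mm, plausible in a mm-scale workspace
print(f"a_peak    ~ {a:.0f} m/s^2")      # ~212 m/s^2 vs the reported 215 m/s^2
```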


There have been a number of serious claims that we are entering an age of abundance - like Rifkin’s “Zero Marginal Cost Society”. This article is a significant signal that this is the case - the consequence is that we need to develop a radically new economic framework and economic theory.
Chinese manufacturing has become so efficient that a new polar fleece blanket costs a mere $2.50 retail -- compared to $2.00 for a recycled blanket.
Between 2000 and 2015, global clothing production doubled, while the average number of times that a garment was worn before disposal declined by 36 percent. In China, it declined by 70 percent.

No One Wants Your Used Clothes Anymore

A once-virtuous cycle is breaking down. What now?
For decades, the donation bin has offered consumers in rich countries a guilt-free way to unload their old clothing. In a virtuous and profitable cycle, a global network of traders would collect these garments, grade them, and transport them around the world to be recycled, worn again, or turned into rags and stuffing.

Now that cycle is breaking down. Fashion trends are accelerating, new clothes are becoming as cheap as used ones, and poor countries are turning their backs on the secondhand trade. Without significant changes in the way that clothes are made and marketed, this could add up to an environmental disaster in the making.

Nobody is more alert to this shift than the roughly 200 businesses devoted to recycling clothes into yarn and blankets in Panipat, India. Located 55 miles north of Delhi, the dusty city of 450,000 has served as the world's largest recycler of woolen garments for at least two decades, becoming a crucial outlet for the $4 billion used-clothing trade.

Panipat's mills specialize in a cloth known as shoddy, which is made from low-quality yarn recycled from woolen garments. Much of what they produce is used to make cheap blankets for disaster-relief operations. It's been a good business: At its peak in the early 2010s, Panipat's shoddy manufacturers could make 100,000 blankets a day, accounting for 90 percent of the relief-blanket market.


Another movement in fundamental science and theory - this is a good summary of current debates regarding evolutionary theory.
Edward O Wilson claimed that human culture is held on a genetic leash. The metaphor was contentious for two reasons. First, as we’ll see, it’s no less true that culture holds genes on a leash. Second, while there must be a genetic propensity for cultural learning, few cultural differences can be explained by underlying genetic differences.
In a single mating season, ‘fads’ can develop in the qualities that individuals find attractive in their partners

Evolution unleashed

Is evolutionary science due for a major overhaul – or is talk of ‘revolution’ misguided?
If you are not a biologist, you’d be forgiven for being confused about the state of evolutionary science. Modern evolutionary biology dates back to a synthesis that emerged around the 1940s-60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet as novel ideas flood in from genomics, epigenetics and developmental biology, most evolutionists agree that their field is in flux. Much of the data implies that evolution is more complex than we once assumed.

Some evolutionary biologists, myself included, are calling for a broader characterisation of evolutionary theory, known as the extended evolutionary synthesis (EES). A central issue is whether what happens to organisms during their lifetime – their development – can play important and previously unanticipated roles in evolution. The orthodox view has been that developmental processes are largely irrelevant to evolution, but the EES views them as pivotal. Protagonists with authoritative credentials square up on both sides of this debate, with big-shot professors at Ivy League universities and members of national academies going head-to-head over the mechanisms of evolution. Some people are even starting to wonder if a revolution is on the cards.


This is a whole new way to think of ‘brain imaging’ - focusing on image content - where will this go in the next couple of decades? The two short videos are totally fascinating.

This Neural Network Built by Japanese Researchers Can ‘Read Minds’

It already seems a little like computers can read our minds; features like Google’s auto-complete, Facebook’s friend suggestions, and the targeted ads that appear while you’re browsing the web sometimes make you wonder, “How did they know?” For better or worse, it seems we’re slowly but surely moving in the direction of computers reading our minds for real, and a new study from researchers in Kyoto, Japan is an unequivocal step in that direction.

A team from Kyoto University used a deep neural network to read and interpret people’s thoughts. Sound crazy? This actually isn’t the first time it’s been done. The difference is that previous methods—and results—were simpler, deconstructing images based on their pixels and basic shapes. The new technique, dubbed “deep image reconstruction,” moves beyond binary pixels, giving researchers the ability to decode images that have multiple layers of color and structure.
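
For readers curious about the mechanics, here is a minimal sketch of the general feature-matching idea behind this kind of reconstruction: optimize an image until its neural-network features match a set of target features. In the study those targets are decoded from fMRI; below, a stand-in tensor plays that role, and none of this is the Kyoto group’s actual code.

```python
import torch
import torchvision.models as models

# Frozen pretrained CNN used as a feature extractor.
cnn = models.vgg19(pretrained=True).features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)

def deep_features(img, layer=20):
    # Run the image through the network, returning one mid-level feature map.
    x = img
    for i, module in enumerate(cnn):
        x = module(x)
        if i == layer:
            return x

# Stand-in for features decoded from brain activity (assumption for the demo).
target_features = deep_features(torch.rand(1, 3, 224, 224))

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    # Nudge the image so its own features approach the decoded targets.
    loss = torch.nn.functional.mse_loss(deep_features(img), target_features)
    loss.backward()
    opt.step()
    img.data.clamp_(0, 1)   # keep pixel values in a displayable range
```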


On the other hand - here’s another way to think about neural imaging.

Dream machines: how IT is changing the world of neuroscience

We talk to computer scientist and entrepreneur Jamil El Imad about the cutting-edge intersection of neuroscience and IT
Floating before her eyes is a menu and the words “Choose your dream”. She sees a range of scenarios: a Buddhist monastery high in the Himalayas, the bright white sands of a deserted Hebridean beach, a steaming Icelandic hot spring, a fragrant Californian redwood grove. With a nod, Emma selects an Alpine meadow, and enters the dream scenario.

At first she sees nothing but a drifting white mist, but as she relaxes, feeling the tension draining from her neck and shoulders, her heart rate slows, her breathing becomes shallower, and the fog begins to part. She sees first a carpet of wildflowers spreading out before her. As she concentrates, the mist rolls back to reveal the full scene. She looks up at the clear sky and sees birds overhead. She hears the mountain breeze and cowbells in the distance. A valley somewhere in Austria is spread out before her. Emma sits back on her sofa, and feels herself like a feather on the wind, a thousand miles from her troubles.

It sounds like the opening to a Philip K Dick novel or a treatment for the next season of Black Mirror, but actually the technology Emma might one day use exists right now in prototype form.

It’s called the Dream Machine, it’s designed to improve mindfulness and concentration, and it’s the brainchild of computer scientist and serial entrepreneur Jamil El Imad. It is the result of his work at Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) on the Human Brain Project, a multi-year programme that is bringing together researchers from across Europe to advance the fields of neuroscience and computing.


This is another fascinating signal in the continually evolving knowledge of DNA and living systems.

Brain Cells Share Information With Virus-Like Capsules

The Arc gene, which is critical for animals’ ability to learn from experiences, has an incredible origin story.
...a gene called Arc which is active in neurons, and plays a vital role in the brain. A mouse that’s born without Arc can’t learn or form new long-term memories. If it finds some cheese in a maze, it will have completely forgotten the right route the next day. “They can’t seem to respond or adapt to changes in their environment,” says Shepherd, who works at the University of Utah, and has been studying Arc for years. “Arc is really key to transducing the information from those experiences into changes in the brain.”

Despite its importance, Arc has been a very difficult gene to study. Scientists often work out what unusual genes do by comparing them to familiar ones with similar features—but Arc is one-of-a-kind. Other mammals have their own versions of Arc, as do birds, reptiles, and amphibians. But in each animal, Arc seems utterly unique—there’s no other gene quite like it. And Shepherd learned why when his team isolated the proteins that are made by Arc, and looked at them under a powerful microscope.

He saw that these Arc proteins assemble into hollow, spherical shells that look uncannily like viruses. “When we looked at them, we thought: What are these things?” says Shepherd. They reminded him of textbook pictures of HIV, and when he showed the images to HIV experts, they confirmed his suspicions. That, to put it bluntly, was a huge surprise. “Here was a brain gene that makes something that looks like a virus,” Shepherd says.

Scientists have in recent years discovered several ways that animals have used the properties of virus-related genes to their evolutionary advantage. Gag moves genetic information between cells, so it’s perfect as the basis of a communication system. Viruses use another gene called env to merge with host cells and avoid the immune system. Those same properties are vital for the placenta—a mammalian organ that unites the tissues of mothers and babies. And sure enough, a gene called syncytin, which is essential for the creation of placentas, actually descends from env. Much of our biology turns out to be viral in nature.


While this is focused on rats - it does look like a good weak signal of progress toward understanding aging.

Study Pinpoints Potential “Master Regulator” of Age-Related Cognitive Decline

Upping a gene’s expression in rat brains made them better learners and normalized the activity of hundreds of other genes to resemble the brains of younger animals.
For more than three decades, Philip Landfield has been chipping away at a central question, namely, “why the electrophysiology of hippocampal connections is impaired in aged animals,” as the University of Kentucky neuroscientist puts it. It’s far from an esoteric problem, given that electrical impulses sent across neuronal connections strengthen synapses over time, forming the physical basis of learning and memory—meaning that less-efficient electrical transmissions are linked to cognitive decline.

With a study published today (December 18) in the Journal of Neuroscience, Landfield says he thinks his team is now close to finally getting to the bottom of the phenomenon that caught his attention in the late 1970s. The answer, according to the new study, involves a family of genes known as FKBP and its regulation of calcium release within neurons. What they found was that increasing expression of one of the genes enhanced the rats’ learning ability and altered the expression levels of hundreds of other genes normally affected by aging, bringing them back to activities typical of younger animals.

“We’re . . . fascinated by the fact that just restoring this one molecule can reverse so many aspects of brain aging,” Landfield says.

Among the study’s implications, notes Gregory Rose, a neuroscientist at Southern Illinois University who served as one of the paper’s peer reviewers, is that it casts doubt on a now-popular idea that neuroinflammation is key to the symptoms of Alzheimer’s disease. The researchers “normalized a lot of a gene expression pattern, but equally importantly, what was not normalized were any of the genes that are upregulated with aging that have to do with neuroinflammation,” Rose tells The Scientist, suggesting that “if people are looking for symptomatic relief [for Alzheimer’s], reducing neuroinflammation is not going to solve the problem.”


The looming antibiotic-resistance crisis may see some breakthroughs in the near future - this is one of them.
Historically, it’s a search riddled with accidental discoveries. The fungal strain that was used to manufacture penicillin turned up on a moldy cantaloupe; quinolones emerged from a bad batch of quinine; microbiologists first isolated bacitracin, a key ingredient in Neosporin ointment, from an infected wound of a girl who had been hit by a truck. Other antibiotics turned up in wild, far-flung corners of the globe: Cephalosporin came from a sewage pipe in Sardinia; erythromycin, the Philippines; vancomycin, Borneo; rifampicin, the French Riviera; rapamycin, Easter Island. By persuading the right microbes to grow under the right condition, we unearthed medicinal chemistry that beat back our own microscopic enemies. But despite technological advances in robotics and chemical synthesis, researchers kept rediscovering many of the same easy-to-isolate antibiotics, earning the old-school method a derisive nickname: “grind and find.”

HOW DIRT COULD SAVE HUMANITY FROM AN INFECTIOUS APOCALYPSE

Brady is creating drugs from dirt. He’s certain that the world’s topsoils contain incredible, practically inexhaustible reservoirs of undiscovered antibiotics, the chemical weapons bacteria use to fend off other microorganisms. He’s not alone in this thinking, but the problem is that the vast majority of bacteria cannot be grown in the lab—a necessary step in cultivating antibiotics.

Brady has found a way around this roadblock, which opens the door to all those untapped bacteria that live in dirt. By cloning DNA out of a kind of bacteria-laden mud soup, and reinstalling these foreign gene sequences into microorganisms that can be grown in the lab, he’s devised a method for discovering antibiotics that could soon treat infectious diseases and fight drug-resistant superbugs. In early 2016, Brady launched a company called Lodo Therapeutics (lodo means mud in Spanish and Portuguese) to scale up production and ultimately help humanity outrun infectious diseases nipping at our heels. Some colleagues call his approach “a walk in the park.” Indeed, his lab recently dispatched two groups of student volunteers to collect bags full of dirt at 275 locations around New York City.

Using high-throughput DNA sequencing, scientists then searched these libraries and their census turned up such astronomical biodiversity that they began adding new branches to the tree of life. By some estimates, the earth harbors more than a trillion individual microbe species. A single gram of soil alone can contain 3,000 bacterial species, each with an average of four million base-pairs of DNA spooled around a single circular chromosome. The next steps followed a simple logic: Find novel genetic diversity, and you’ll inevitably turn up new chemical diversity.
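
A toy illustration of that “search the library” step: scanning sequenced reads for a conserved motif shared by a known antibiotic-making gene family. The motif and reads below are invented for illustration; real pipelines compare protein domains across millions of reads.

```python
import re

# IUPAC degenerate codes let one probe pattern cover related DNA sequences.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "N": "[ACGT]"}

def motif_to_regex(motif):
    # Expand a degenerate motif into a concrete regular expression.
    return re.compile("".join(IUPAC[base] for base in motif))

# Hypothetical conserved motif from a biosynthetic gene alignment (made up).
probe = motif_to_regex("GGRTTYAAYGC")

reads = [
    "ATTGGATTCAACGCTTA",   # contains a match (GGATTCAACGC)
    "CCCCCCCCCCCCCCCCC",   # no match
]
hits = [r for r in reads if probe.search(r)]
print(f"{len(hits)} of {len(reads)} reads carry the probe motif")
```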

Thursday, January 18, 2018

Friday Thinking 19 Jan. 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


Content
Quotes:

Articles:




... Deep Learning is in fact the stepping stone tool that other cognitive tools will leverage to achieve higher levels of cognition. We’ve already seen this in DeepMind’s AlphaZero playing systems where conventional tree search is used in conjunction with Deep Learning. Deep Learning is the wheel of cognition. Just as the wheel enabled more effective transportation, so will Deep Learning achieve effective artificial intelligence.

Politics in science has always been present and it’s not going to disappear any time soon. We are familiar with the feud between Nikola Tesla and Thomas Edison. Edison died a wealthy man, in stark contrast to Tesla, who died penniless. Yet the scientific contributions of Tesla arguably surpass Edison’s. Still, Edison is famous today, and Tesla is likely well known only because an electric car company is named after him.

The Boogeyman Argument that Deep Learning will be Stopped by a Wall




IN 2015, WHEN Lazarus Liu moved home to China after studying logistics in the United Kingdom for three years, he quickly noticed that something had changed: Everyone paid for everything with their phones. At McDonald’s, the convenience store, even at mom-and-pop restaurants, his friends in Shanghai used mobile payments. Cash, Liu could see, had been largely replaced by two smartphone apps: Alipay and WeChat Pay. One day, at a vegetable market, he watched a woman his mother’s age pull out her phone to pay for her groceries. He decided to sign up.

To get an Alipay ID, Liu had to enter his cell phone number and scan his national ID card. He did so reflexively. Alipay had built a reputation for reliability, and compared to going to a bank managed with slothlike indifference and zero attention to customer service, signing up for Alipay was almost fun. With just a few clicks he was in. Alipay’s slogan summed up the experience: “Trust makes it simple.”

Alipay turned out to be so convenient that Liu began using it multiple times a day, starting first thing in the morning, when he ordered breakfast through a food delivery app. He realized that he could pay for parking through Alipay’s My Car feature, so he added his driver’s license and license plate numbers, as well as the engine number of his Audi. He started making his car insurance payments with the app. He booked doctors’ appointments there, skipping the chaotic lines for which Chinese hospitals are famous. He added friends in Alipay’s built-in social network. When Liu went on vacation with his fiancée (now his wife) to Thailand, they paid at restaurants and bought trinkets with Alipay. He stored whatever money was left over, which wasn’t much once the vacation and car were paid for, in an Alipay money market account. He could have paid his electricity, gas, and internet bills in Alipay’s City Service section. Like many young Chinese who had become enamored of the mobile payment services offered by Alipay and WeChat, Liu stopped bringing his wallet when he left the house.

INSIDE CHINA'S VAST NEW EXPERIMENT IN SOCIAL RANKING




real cities are their own reasons for existing. If it only exists to serve a function in the broader world, it's a town, not a city. What real cities do -- whether finance, or tech, or energy, or governance -- is not who they are. The history of any major city illustrates this. San Francisco was about the gold rush before it was about tech. Seattle was about fur, fish, and lumber before it was about Boeing or Amazon. New York was about textiles before it was about high finance. And all of them are always, first and foremost, about themselves. About their unique psychological identities.

The current global-macroeconomy "job" of a city has only a weak correlation with its essential nature.

In fact, it is cities, not nations, that best fit the formula "Make X Great Again." Great cities are longer-lived than nations and empires. Often they are effectively immortal. When nations experience multiple chapters of greatness, it is usually traceable to chapters of greatness in one or more of their great, immortal cities. Long-lived cities are make-ourselves-great-again engines. They keep finding new ways of continuing the game of being themselves rather than trying to win a particular economic era (ie for you Carseans out there, great cities are infinite-game players).

And greatness is never about the jobs being done at any given time, whether you're talking individuals, cities, or nations.

Why Cities Fail






This is a clear signal of the emerging change in energy geopolitics - a longish article - but worth it for anyone interested in the change in conditions of change.
“What I like about the job is that it is about much more than energy,” says Šefčovič. “It came from [European Commission president] Juncker’s idea to have a much more horizontal, cross-cutting approach in energy policy, which means I am working with a team of fourteen commissioners in my Energy Union portfolio. If I have to sum up what the job is about: it is making sure that we are energizing and modernizing the European economy.”

Interview with Maros Šefčovič: Energy Union is “Deepest Transformation of Energy Systems Since Industrial Revolution”

Before the next European elections in 2019, Maroš Šefčovič, the European Commission’s Vice-President for the Energy Union, wants to have a new legal framework in place which will “bring in the most comprehensive and deepest transformation of energy systems in Europe, since the [industrial revolution] one hundred and fifty years ago.” In an exclusive interview with Energy Post, he says that the success of the Energy Union project “will decide the place of Europe on the geopolitical and economic map of the 21st Century”. Renewables, decentralized energy, digitalization and smart grids will be the “backbone of the new modern economy in Europe.” On the controversy over Nord Stream 2, Gazprom’s pipeline project, Šefčovič says he wants to resolve the issue through “negotiations”.


This is a very long read - but also worth it for anyone interested in a seasoned, intelligent foresight analysis by Rodney Brooks.

MY DATED PREDICTIONS

With all new technologies there are predictions of how good it will be for humankind, or how bad it will be. A common thread that I have observed is how people tend to underestimate how long new technologies will take to be adopted after proof of concept demonstrations. I pointed to this as the seventh of seven deadly sins of predicting the future of AI.

For example, recently the early techno-utopianism of the Internet providing a voice to everyone and thus blocking the ability of individuals to be controlled by governments has turned to depression about how it just did not work out that way. And there has been discussion of how the good future we thought we were promised is taking much longer to be deployed than we had ever imagined. This is precisely a realization that the early optimism about how things would be deployed and used just did not turn out to be accurate.

Over the last few months I have been throwing a little cold water over what I consider to be current hype around Artificial Intelligence (AI) and Machine Learning (ML). However, I do not think that I am a techno-pessimist. Rather, I think of myself as a techno-realist…


This is an interesting analysis pointing to the ‘unfixable’ nature of broken platforms - because of the fundamental nature of their business models. The focus is on FB - but it’s much the same for all platforms - except Wikimedia. Imagine if Facebook had chosen to become a foundation the way Wikimedia has?
We love to think our corporate heroes are somehow super human, capable of understanding what’s otherwise incomprehensible to mere mortals like the rest of us. But Facebook is simply too large an ecosystem for one person to fix. And anyway, his hands are tied from doing so. So instead, he’s doing what people (especially engineers) always do when the problem is so existential they can’t wrap their minds around it: He’s redefining the problem and breaking it into constituent parts.

Facebook Can’t Be Fixed.

Facebook’s core problem is not foreign interference, spam bots, trolls, or fame mongers. It’s the company’s core business model, and abandoning it is not an option.
In his short but impactful post, Zuckerberg notes that when he started doing personal challenges in 2009, Facebook did not have “a sustainable business model,” so his first pledge was to wear a tie all year, so as to focus himself on finding that model.

He sure as hell did find that model: data-driven audience-based advertising, but more on that in a minute. In his post, Zuckerberg notes that 2018 feels “a lot like that first year,” adding “Facebook has a lot of work to do — whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent….My personal challenge for 2018 is to focus on fixing these important issues.”

The post is worthy of a doctoral dissertation. I’ve read it over and over, and would love, at some point, to break it down paragraph by paragraph. Maybe I’ll get to that someday, but first I want to emphatically state something it seems no one else is saying (at least not in mainstream press coverage of the post):
You cannot fix Facebook without completely gutting its advertising-driven business model.


This is interesting - it could be a weak signal for the emergence of new institutions - like Auditor Generals of Algorithms.
I don't think it is possible to guarantee prevention of harm in our algorithms - for the same reason that one can't prestate the phase space of the future evolution of the biosphere and thus can't predict the future of evolution.
But what we can do - and have always done - is build robust, response-able, timely recourse systems to address and correct.
“The topics that we’re concerned with are how do you scrutinise an algorithm; how do you hold an algorithm accountable when it’s making very important decisions that actually affect the experiences and life outcomes of people,” Suleyman says. “We want these systems in production to be our highest collective selves. We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last sixty years.”

DeepMind's new AI ethics unit is the company's next big move

Google-owned DeepMind has announced the formation of a major new AI research unit comprised of full-time staff and external advisors
As we hand over more of our lives to artificial intelligence systems, keeping a firm grip on their ethical and societal impact is crucial. For DeepMind, whose stated mission is to “solve intelligence”, that task will be the work of a new initiative tackling one of the most fundamental challenges of the digital age: technology is not neutral.
DeepMind Ethics & Society (DMES), a unit comprised of both full-time DeepMind employees and external fellows, is the company’s latest attempt to scrutinise the societal impacts of the technologies it creates. In development for the past 18 months, the unit is currently made up of around eight DeepMind staffers and six external, unpaid fellows. The full-time team within DeepMind will swell to around 25 people within the next 12 months.

Headed by technology consultant Sean Legassick and former Google UK and EU policy manager and government adviser Verity Harding, DMES will work alongside technologists within DeepMind and fund external research based on six areas: privacy transparency and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world’s challenges. Within those broad themes, some of the specific areas addressed will be algorithmic bias, the future of work and lethal autonomous weapons. Its aim, according to DeepMind, is twofold: to help technologists understand the ethical implications of their work and help society decide how AI can be beneficial.

For DeepMind co-founder Mustafa Suleyman, it’s a significant moment. “We’re going to be putting together a very meaningful team, we’re going to be funding a lot of independent research,” he says when we meet at the firm’s London headquarters. Suleyman is bullish about his company’s efforts to not just break new frontiers in artificial intelligence technology, but also keep a grip on the ethical implications. “We’re going to be collaborating with all kinds of think tanks and academics. I think it’s exciting to be a company that is putting sensitive issues, proactively, up-front, on the table, for public discussion.”

To explain where the idea for DMES came from, Suleyman looks back to before the founding of DeepMind in 2010. “My background before that was pretty much seven or eight years as an activist,” he says. An Oxford University drop-out at the age of 19, Suleyman went on to found a telephone counselling service for young Muslims before working as an advisor to then Mayor of London Ken Livingstone, followed by spells at the UN, the Dutch government and WWF. He explains his ambition thusly, “How do you get people who speak very different social languages to put purpose ahead of profit in the heart of their organisations and coordinate effectively?”


Big data and AI are enabling new forms of diagnostic tools - this one is cheaper, easier and better.
“To see in many dimensions at the same time is very difficult. With machine learning, it becomes easy”

Colonoscopy? How About a Blood Test?

Israel’s Medial offers less-invasive, more data-driven assessments to resistant patients.
An Israeli health-tech company is trying to use machine learning software to do just that. ColonFlag is the first product from Medial EarlySign, and while poorly named, the software predicts colon cancer twice as well as the fecal exam that’s the industry-standard colonoscopy alternative, according to a 2016 study published in the Journal of the American Medical Informatics Association. ColonFlag compares new blood tests against a patient’s previous diagnostics, as well as Medial’s proprietary database of 20 million anonymized tests spanning three decades and three continents, to evaluate the patient’s likelihood of harboring cancer. Israel’s second-largest health maintenance organization is already using the software, and Medial (a mashup of “medical” and “algorithms”) is working with Kaiser Permanente and two leading U.S. hospitals to develop other uses for its database and analysis tools.

“Our algorithms can automatically scan all the patient parameters and detect subtle changes over time to find correlative patterns for outcomes we want to predict,” Nir Kalkstein, Medial’s co-founder and chief technology officer, says, characteristically clinical. The database allows his team “to find similar events in the past and then identify from the data correlations that can predict these events.”
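
To make the idea concrete, here is a minimal sketch of that kind of longitudinal pattern-finding on synthetic data. It is emphatically not Medial’s model: the features, the outcome rule, and every number below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
# Represent each patient by a current blood value plus its change over time.
hemoglobin_now = rng.normal(14, 1.5, n)
hemoglobin_prev = hemoglobin_now + rng.normal(0.2, 0.5, n)   # earlier test
delta = hemoglobin_now - hemoglobin_prev                     # trend feature

# Assumed toy outcome: a slow hemoglobin decline raises risk (occult bleeding
# is one real-world reason colon cancer shows up in routine blood counts).
risk = 1 / (1 + np.exp(-(-delta * 4 - (hemoglobin_now - 14))))
y = rng.random(n) < risk * 0.3

X = np.column_stack([hemoglobin_now, delta])
model = GradientBoostingClassifier().fit(X, y)

# Score a new patient: low hemoglobin that has recently dropped.
print("risk score:", model.predict_proba([[11.5, -1.2]])[0, 1])
```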

Other companies are building massive databases with an eye toward predictive medicine, including heavy hitters such as DeepMind Technologies, owned by Google parent Alphabet Inc. In Boulder, Colo., startup SomaLogic Inc. is predicting heart attacks based on combinations of certain proteins in cardiac patients. In Salt Lake City, Myriad Genetics Inc. assesses hereditary cancer risks based on DNA profiles.


Moore’s Law is Dead - Long Live Moore’s Law. This is a strong signal of the emerging computational paradigms, which include memristor, quantum, and DNA computing, just to name a few. There’s a 1 min video.

Intel’s New Self-Learning Chip Promises to Accelerate Artificial Intelligence

Intel Introduces First-of-Its-Kind Self-Learning Chip Codenamed Loihi
Imagine a future where complex decisions could be made faster and adapt over time. Where societal and industrial problems can be autonomously solved using learned experiences.

It’s a future where first responders using image-recognition applications can analyze streetlight camera images and quickly solve missing or abducted person reports.
It’s a future where stoplights automatically adjust their timing to sync with the flow of traffic, reducing gridlock and optimizing starts and stops.

It’s a future where robots are more autonomous and performance efficiency is dramatically increased.

An increasing need for collection, analysis and decision-making from highly dynamic and unstructured natural data is driving demand for compute that may outpace both classic CPU and GPU architectures. To keep pace with the evolution of technology and to drive computing beyond PCs and servers, Intel has been working for the past six years on specialized architectures that can accelerate classic compute platforms. Intel has also recently advanced investments and R&D in artificial intelligence (AI) and neuromorphic computing.
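
For a flavor of what “neuromorphic” means in practice, here is a textbook leaky integrate-and-fire neuron, the basic unit of the spiking networks such chips run. This is the generic model, not Loihi’s actual circuit or API.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward
# zero, integrates input current, and emits a spike (then resets) when it
# crosses a threshold. Neuromorphic chips implement many such units in silicon.
dt, tau = 1.0, 20.0            # time step and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0   # spike threshold and reset potential (arb. units)

rng = np.random.default_rng(1)
input_current = rng.uniform(0, 0.12, 200)   # assumed random drive, one value/ms

v, spike_times = 0.0, []
for t, i_in in enumerate(input_current):
    v += (dt / tau) * (-v) + i_in   # leak plus integrated input
    if v >= v_thresh:
        spike_times.append(t)       # record the spike...
        v = v_reset                 # ...and reset the membrane
print(f"{len(spike_times)} spikes in {len(input_current)} ms")
```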


This is a fascinating confirmation of some aspects of the biological foundations of our moral and social fabric.
"The findings give us a glimpse into what is the nature of morality," said Dr. Marco Iacoboni, director of the Neuromodulation Lab at UCLA's Ahmanson-Lovelace Brain Mapping Center and the study's senior author. "This is a foundational question to understand ourselves, and to understand how the brain shapes our own nature."

Mirror neuron activity predicts people's decision-making in moral dilemmas, study finds

It is wartime. You and your fellow refugees are hiding from enemy soldiers, when a baby begins to cry. You cover her mouth to block the sound. If you remove your hand, her crying will draw the attention of the soldiers, who will kill everyone. If you smother the child, you'll save yourself and the others.

If you were in that situation, which was dramatized in the final episode of the '70s and '80s TV series "M.A.S.H.," what would you do?

The results of a new UCLA study suggest that scientists could make a good guess based on how the brain responds when people watch someone else experience pain. The study found that those responses predict whether people will be inclined to avoid causing harm to others when facing moral dilemmas.

Iacoboni and his colleagues hypothesized that people who had greater neural resonance than the other participants while watching the hand-piercing video would also be less likely to choose to silence the baby in the hypothetical dilemma, and that proved to be true. Indeed, people with stronger activity in the inferior frontal cortex, a part of the brain essential for empathy and imitation, were less willing to cause direct harm, such as silencing the baby.

But the researchers found no correlation between people's brain activity and their willingness to hypothetically harm one person in the interest of the greater good—such as silencing the baby to save more lives. Those decisions are thought to stem from more cognitive, deliberative processes.

The study confirms that genuine concern for others' pain plays a causal role in moral dilemma judgments, Iacoboni said. In other words, a person's refusal to silence the baby is due to concern for the baby, not just the person's own discomfort in taking that action.


The Human Genome Project began in 1990 as an intended 15-year effort. Halfway through, the project had sequenced only 1% of the human genome. The power of exponential technology progress enabled the project to be completed in 2003 - two years ahead of schedule.
The project to map the ‘Connectome’ is facing similar challenges and trajectories. The challenge of shifting to a team approach is increased by continued reliance on traditional siloed disciplines.
In July 2016, an international team published a map of the human brain's wrinkled outer layer, the cerebral cortex. Many scientists consider the result to be the most detailed human brain-connectivity map so far. Yet, even at its highest spatial resolution (1 cubic millimetre), each voxel — the smallest distinguishable element of a 3D object — contains tens of thousands of neurons. That's a far cry from the neural connections that have been mapped at single-cell resolution in the fruit fly.

“In case you thought brain anatomy is a solved problem, take it from us — it isn't,” says Van Wedeen, a neuroscientist at Massachusetts General Hospital in Charlestown and a principal investigator for the Human Connectome Project (HCP), a US-government-funded global consortium that published the brain map.

Neuroscience: Big brain, big data

Neuroscientists are starting to share and integrate data — but shifting to a team approach isn't easy.
As big brain-mapping initiatives go, Taiwan's might seem small. Scientists there are studying the humble fruit fly, reverse-engineering its brain from images of single neurons. Their efforts have produced 3D maps of brain circuitry in stunning detail.

Researchers need only a computer mouse and web browser to home in on individual cells and zoom back out to intertwined networks of nerve bundles. The wiring diagrams look like colourful threads on a tapestry, and they're clear enough to show which cell clusters control specific behaviours. By stimulating a specific neural circuit, researchers can cue a fly to flap its left wing or swing its head from side to side — feats that roused a late-afternoon crowd in November at the annual meeting of the Society for Neuroscience in San Diego, California.

But even for such a small creature, it has taken the team a full decade to image 60,000 neurons, at a rate of 1 gigabyte per cell, says project leader Ann-Shyn Chiang, a neuroscientist at the National Tsing Hua University in Hsinchu City, Taiwan — and that's not even half of the nerve cells in the Drosophila brain. Using the same protocol to image the 86 billion neurons in the human brain would take an estimated 17 million years, Chiang reported at the meeting.
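
Chiang’s estimate is easy to reproduce; the arithmetic below uses only the figures quoted above:

```python
# Back-of-envelope check on the connectome imaging numbers.
neurons_imaged, years_taken = 60_000, 10
human_neurons = 86_000_000_000

rate = neurons_imaged / years_taken                       # 6,000 neurons per year
print(f"{human_neurons / rate / 1e6:.1f} million years")  # ~14.3M years, the same
                                                          # order as the ~17M quoted

# At 1 gigabyte per neuron, the raw image data alone would be enormous:
print(f"~{human_neurons / 1e9:.0f} exabytes")             # 86 billion GB ~ 86 EB
```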

But brain mapping and DNA sequencing are different beasts. A single neuroimaging data set can measure in the terabytes — two to three orders of magnitude larger than a complete mammalian genome. Whereas geneticists know when they've finished decoding a stretch of DNA, brain mappers lack clear stopping points and wrestle with much richer sets of imaging and electrophysiological data — all the while wrangling over the best ways to collect, share and interpret them. As scientists develop tools to share and analyse ever-expanding neuroscience data sets, however, they are coming to a shared realization: cracking the brain requires a concerted effort.


Another signal in the emerging transformation of energy geopolitics.

Britain Now Generates Twice as Much Electricity from Wind as Coal

Just six years ago, more than 40% of Britain’s electricity was generated by burning coal. Today, that figure is just 7%. Yet if the story of 2016 was the dramatic demise of coal and its replacement by natural gas, then 2017 was most definitely about the growth of wind power.

Wind provided 15% of electricity in Britain last year (Northern Ireland shares an electricity system with the Republic and is calculated separately), up from 10% in 2016. This increase, a result of both more wind farms coming online and a windier year, helped further reduce coal use and also put a stop to the rise in natural gas generation.


This is a good signal in the progress of domesticating DNA and in fighting climate change.
"This could be an important breakthrough in biotechnology. It should be possible to optimise the system still further and finally develop a `microbial cell factory' that could be used to mop up carbon dioxide from many different types of industry.
"Not all bacteria are bad. Some might even save the planet."

A biological solution to carbon capture and recycling?

E. coli bacteria shown to be excellent at CO2 conversion
Scientists at the University of Dundee have discovered that E. coli bacteria could hold the key to an efficient method of capturing and storing or recycling carbon dioxide.

Professor Frank Sargent and colleagues at the University of Dundee's School of Life Sciences, working with local industry partners Sasol UK and Ingenza Ltd, have developed a process that enables the E. coli bacterium to act as a very efficient carbon capture device.

"For example, the E. coli bacterium can grow in the complete absence of oxygen. When it does this it makes a special metal-containing enzyme, called 'FHL', which can interconvert gaseous carbon dioxide with liquid formic acid. This could provide an opportunity to capture carbon dioxide into a manageable product that is easily stored, controlled or even used to make other things. The trouble is, the normal conversion process is slow and sometime unreliable.

"What we have done is develop a process that enables the E. coli bacterium to operate as a very efficient biological carbon capture device. When the bacteria containing the FHL enzyme are placed under pressurised carbon dioxide and hydrogen gas mixtures -- up to 10 atmospheres of pressure -- then 100 per cent conversion of the carbon dioxide to formic acid is observed. The reaction happens quickly, over a few hours, and at ambient temperatures.


The links between our microbial profile, our diet and our health conditions continue to unfold. This article is worth reading.

Treating Disease by Nudging the Microbes Inside Us

We’ve spent centuries trying to kill bacteria. Now, scientists have shown that subtler approaches can work—at least in mice.
the links between microbes and poor health can be more complicated. Our bodies are naturally home to tens of trillions of bacteria. Most are benign, or even beneficial. But often, these so-called microbiomes can shift into a negative state. For example, inflamed guts tend to house an unusually large number of bacteria from the Enterobacteriaceae family (pronounced En-ter-oh-back-tee-ree-ay-see-ay, and hereafter just “enteros”). There’s no villain in this scenario, no single antagonist as there would be in the case of tuberculosis or cholera. The enteros are part of a normal gut; it’s the same old community, just altered.

These kinds of shifts are harder to rectify. For a start, it’s often unclear if the enteros cause the inflammation, if the inflammation changes the microbes, or both. Even if the microbes are responsible, how do you fix that? Dietary changes are typically too imprecise. Antibiotics are too crude, killing off beneficial microbes while suppressing the problematic ones.

But Sebastian Winter, from the University of Texas Southwestern Medical Center, has an alternative. His team showed that the blooming enteros rely on enzymes that, in turn, depend on the metal molybdenum. A related metal—tungsten—can take the place of molybdenum, and stop those enzymes from working properly.

By feeding mice small amounts of tungsten salts, Winter’s team managed to specifically prevent the growth of enteros, while leaving other microbes unaffected. Best of all, the tungsten treatment spared the enteros under normal conditions, suppressing them only in the context of an inflamed gut. It’s a far more precise and subtle way of changing the microbiome than, say, blasting it with antibiotics. It involves gentle nudges rather than killing blows.


New ‘wonders of the world’ in the making. The 2.5 min video explains it all.

Casting a $20 Million Mirror for the World’s Largest Telescope

The glass arcs that will let astronomers peer back millions of years are decades in the making
Building a mirror for any giant telescope is no simple feat. The sheer size of the glass, the nanometer precision of its curves, its carefully calculated optics, and the adaptive software required to run it make this a task of herculean proportions. But the recent castings of the 15-metric ton, off-axis mirrors for the Giant Magellan Telescope (GMT) forced engineers to push the design and manufacturing process beyond all previous limits.

Building the GMT is not a task of years, but of decades. The Giant Magellan Telescope Organization (GMTO) and a team at the University of Arizona's Richard F. Caris Mirror Laboratory cast the first of seven mirrors back in 2005; they expect to complete construction of the telescope in 2025. Once complete, it’s expected to be the largest telescope in the world. The seven 8.4-meter-wide mirrors will combine to serve as a 24.5-meter mirror telescope with 10 times the resolution of the Hubble Space Telescope. This will allow astronomers to gaze back in time to, they hope, the materialization of galaxies.
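
The “10 times the resolution of Hubble” claim follows from the diffraction limit, which scales inversely with aperture diameter (550 nm visible light is my assumed wavelength):

```python
import math

# Diffraction-limited angular resolution: theta ~ 1.22 * wavelength / diameter.
wavelength = 550e-9                      # assumed: mid-visible light, in metres
for name, diameter in [("Hubble (2.4 m)", 2.4), ("GMT (24.5 m)", 24.5)]:
    theta = 1.22 * wavelength / diameter             # in radians
    mas = math.degrees(theta) * 3600 * 1000          # milliarcseconds
    print(f"{name}: {mas:.1f} mas")                  # ~57.6 vs ~5.6 mas

# The 24.5 m combined aperture is ~10x Hubble's 2.4 m mirror, hence ~10x
# sharper in principle (on the ground, adaptive optics must deliver it).
```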


Optical and Olfactory Illusions? This is another fun challenge to the notion of our capacity to perceive an objective world.
"The Skittles people, being much smarter than most of us, recognized that it is cheaper to make things smell and look different than it is to make them actually taste different."
Katz continues: "So, Skittles have different fragrances and different colors — but they all taste exactly the same."

Are Gummy Bear Flavors Just Fooling Our Brains?

Don Katz is a Brandeis University neuropsychologist who specializes in taste. "I have a colleague in the U.K., Charles Spence, who did the most wonderful experiment," Katz says. "He took normal college students and gave them a row of clear beverages in clear glass bottles. The beverages had fruit flavorings. One was orange, one was grape, apple, lemon."

Spence, who is also the author of the 2017 book Gastrophysics: The New Science of Eating, says he has "always been interested in how the senses affect one another" and conducted the experiment because "there is perhaps nothing more multisensory than flavor perception."

According to Katz, the college students did a great job of differentiating between the flavors of the clear liquid.
"But then he added food coloring," Katz says. "The 'wrong' food coloring for the liquid."

So, the grape-flavored liquid was then colored orange, for example.
"While I wouldn't say they went to chance, their ability to tell which was which got really subpar all of the sudden," Katz says. "The orange beverage tasted orange [to them]. The yellow beverage tasted like lemonade. There wasn't a thing they could do about it."

It was so powerful that even when Spence told the students that it was his job as a scientist to mess with the conditions and asked them to just tell him what they tasted without considering the color, they still couldn't do it.


And here’s some good news for carnivores and bacon-lovers.

U.K. firm touts bacon breakthrough

Recipe uses fruit, spice extracts in place of cancer-linked nitrites
There’s good news for people whose New Year’s resolution is to eat more bacon. This week, the British sausage maker Finnebrogue will introduce bacon rashers made without added nitrites, which have been linked to cancer.

The new English breakfast component will be available only to shoppers in the U.K., who consume, on average, more than 3 kg of bacon per year. Finnebrogue says its Naked Bacon is made with natural fruit and spice extracts.

The firm spent 10 years and $19 million to develop the recipe with the Spanish food ingredients firm Prosur. It claims that the product has beaten the competition in taste tests.