Thursday, May 10, 2018

Friday Thinking 11 May 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes:

Articles:



While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.

Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design.

… let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits.

It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms.

We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA (Intelligence Augmentation) and II.

Artificial Intelligence — The Revolution Hasn’t Happened Yet



As recently as two decades ago, most people would have thought it absurd to countenance a free and open encyclopaedia, produced by a community of dispersed enthusiasts primarily driven by other motives than profit-maximisation, and the idea that this might displace the corporate-organised Encyclopaedia Britannica and Microsoft Encarta would have seemed preposterous. Similarly, very few people would have thought it possible that the top 500 supercomputers and the majority of websites would run on software produced in the same way, or that non-coercive cooperation using globally shared resources could produce artifacts as effectively as those produced by industrial capitalism, but more sustainably. It would have been unimaginable that such things should have been created through processes that were far more pleasant than the work conditions that typically result in such products.

Commons-based production goes against many of the assumptions of mainstream, standard-textbook economists. Individuals primarily motivated by their interest to maximise profit, competition and private property are the Holy Grail of innovation and progress – more than that: of freedom and liberty themselves. One should never forget these two everlasting ‘truths’ if one wants to understand the economy and the world, we are told. These are the two premises of the free-market economics that have dominated the discourse until today.

Already a decade ago (when smartphones were a novelty), Benkler argued in The Wealth of Networks (2006) that a new mode of production was emerging that would shape how we produce and consume information. He called this mode ‘commons-based peer production’ and claimed that it can deliver better artifacts while promoting another aspect of human nature: social cooperation. Digitisation does not change the human person (in this respect), it just allows her to develop in ways that had previously been blocked, whether by chance or design.

No matter where they are based, people today can use the internet to cooperate and globally share the products of their cooperation as a commons. Commons-based peer production (usually abbreviated as CBPP) is fundamentally different from the dominant modes of production under industrial capitalism. In the latter, owners of means of production hire workers, direct the work process, and sell products for profit-maximisation. Think how typical multinational corporations are working. Such production is organised by allocating resources through the market (pricing) and through hierarchical command. In contrast, CBPP is in principle open to anyone with the relevant skills to contribute to a common project: the knowledge of every participant is pooled.

Utopia now




For those who just got here, "Old people in big cities afraid of the sky" is my one-line description of mid-21st century life.  "Aging demographics, global urbanization, climate disasters." Already here, will be much further distributed

Bruce Sterling on Twitter




I’m still asking myself the same question that I asked myself ten years ago: "What is going on in my community?" I work in the foundations of physics, and I see a lot of strange things happening there. When I look at the papers that are being published, many of them seem to be produced simply because papers have to be produced. They don’t move us forward in any significant way. I get the impression that people are working on them not so much because it’s what they’re interested in but because they have to produce outcomes in a short amount of time. They sit on short-term positions and have short-term contracts, and papers must be produced.

If that is the case, then you work on what’s easy to do and what can quickly be finished. Of course, that is not a new story. I believe it explains a lot of what I see happening in my field and in related fields. The ideas that survive are the ideas that are fruitful in the sense of quickly producing a lot of publications, and that’s not necessarily correlated with these ideas being important to advancing science.

...For this we need science to work properly. First of all, to get this done will require that we understand better how science works. I find it ironic that we have models for how political systems work. We have voting models. We have certain understanding for how these things go about.

We also have a variety of models for the economic system and for the interaction with the political system. But we pretty much know nothing about the dynamics of knowledge discovery. We don’t know how the academic system works, for how people develop their ideas, for how these ideas get selected, for how these ideas proliferate. We don’t have any good understanding of how that works. That will be necessary to solve these problems. We will also have to get this knowledge about how science works closer to the people who do the science. To work in this field, you need to have an education for how knowledge discovery works and what it takes to make it work properly. And that is currently missing.

Looking in the Wrong Places




This is an excellent account of our current media environment - the message can no longer be controlled - and the environment is rife with what seems like self-replicating meme-propaganda. Perhaps honesty can survive if we can sustain conversations that promote better narratives for a global world.

Memes That Kill: The Future Of Information Warfare

Memes and social networks have become weaponized, while many governments seem ill-equipped to understand the new reality of information warfare. How will we fight state-sponsored disinformation and propaganda in the future?
In 2011, a university professor with a background in robotics presented an idea that seemed radical at the time.

After conducting research backed by DARPA — the same defense agency that helped spawn the internet — Dr. Robert Finkelstein proposed the creation of a brand new arm of the US military, a “Meme Control Center.”

In internet-speak the word “meme” often refers to an amusing picture that goes viral on social media. More broadly, however, a meme is any idea that spreads, whether that idea is true or false.

It is this broader definition of meme that Finkelstein had in mind when he proposed the Meme Control Center and his idea of “memetic warfare.”

The presentation by Finkelstein can be found here.


And here’s a more recent exploration from RAND into the challenge of the accelerating ‘infobahn’ and the need for rapid ‘response-ability’.
"The single biggest change that I experienced in almost 25 years . . . was in the area of speed," said Blinken, a RAND adjunct researcher who shared his expertise on the project. "Nothing had a more profound effect on government and the challenges of government."

Can Humans Survive a Faster Future?

Life is moving faster and faster. Just about everything—transportation, weapons, the flow of information—is accelerating. How will decisionmakers preserve our personal and national security in the face of hyperspeed?
As the velocity of information—and just about everything else—accelerates, leaders face immense pressure to act or respond quickly. To help them adapt, researchers at the RAND Corporation are studying the phenomenon of speed as part of a special project, known as Security 2040, which looks over the horizon to evaluate future threats.

In his former roles as Deputy Secretary of State and Deputy National Security Adviser, Antony Blinken was one of those leaders responding to speed-driven crises.


This is a great signal of the emergence of the ‘Smart Nation’ - well worth the view. The interactive website provides lots of information and examples.

we have built a digital society and so can you

Named ‘the most advanced digital society in the world’ by Wired, ingenious Estonians are pathfinders, who have built an efficient, secure and transparent ecosystem that saves time and money. e-Estonia invites you to follow the digital journey.

Ambitious Future
Successful countries need to be ready to experiment. Building e-Estonia as one of the most advanced e-societies in the world has involved continuous experimentation and learning from our mistakes. Estonia sees the natural next step in the evolution of the e-state as moving basic services into a fully digital mode. This means that things can be done for citizens automatically and in that sense invisibly.

In order to remain an innovative, effective and successful Northern country that leads by example, we need to continue executing our vision of becoming a safe e-state with automatic e-services available 24/7.


The change in conditions of change also involves a massive phase transition in population demographics - the unprecedented reversal of the classic age pyramid and the increase in life expectancy and age inflation. One consequence is…
"This means the arc of our lives must be re-examined. Future jobs will be filled by healthy, vibrant people over 75, perhaps in non-profit work (such as my 78-year-old father) or just helping out with the family. Women will be able to have children into their 40s with new technologies, allowing them to postpone starting a family."

The next great workplace challenge: 100-year careers

Scientists expect people to live routinely to 100 in the coming decades, and as long as 150. Which also suggests a much longer working life lasting well into the 70s, 80s, and even 100, according to researchers with Pearson and Oxford University.

Quick take: Thinkers of various types are absorbed in navigating the age of automation and flat wages, but their challenge will be complicated by something few have considered — a much-extended bulge of older workers.

That includes an even harder time balancing new blood and experience, and sussing out the best basic education for lives probably traversing numerous professions. "How will we ever prepare someone in 16 years for a 100-year career?" Pearson's Amar Kumar tells Axios.

In researching the future of work, the Pearson-Oxford team began with a question — if a child were starting school today, what skills would he or she ideally learn in order to be ready for a possibly century-long career (the list they came up with is below)?


This is a summary of an OECD study (the one I wanted to post was behind the Economist paywall) - given the 100+ year life - how many careers will be part of such a life? This is a more complex and challenging question than ‘how many jobs will a person have?’

Study finds nearly half of jobs are vulnerable to automation

Job-grabbing robots are no longer science fiction. In 2013 Carl Benedikt Frey and Michael Osborne of Oxford University used—what else?—a machine-learning algorithm to assess how easily 702 different kinds of job in America could be automated. They concluded that fully 47% could be done by machines “over the next decade or two”.

A new working paper by the OECD, a club of mostly rich countries, employs a similar approach, looking at other developed economies. Its technique differs from Mr Frey and Mr Osborne’s study by assessing the automatability of each task within a given job, based on a survey of skills in 2015. Overall, the study finds that 14% of jobs across 32 countries are highly vulnerable, defined as having at least a 70% chance of automation. A further 32% were slightly less imperilled, with a probability between 50% and 70%. At current employment rates, that puts 210m jobs at risk across the 32 countries in the study.

The pain will not be shared evenly. The study finds large variation across countries: jobs in Slovakia are twice as vulnerable as those in Norway. In general, workers in rich countries appear less at risk than those in middle-income ones. But wide gaps exist even between countries of similar wealth.
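The study's headline numbers hang together with some quick arithmetic. A minimal sketch, using only the percentages and the 210m figure quoted above (the "implied total employment" is my own back-of-envelope inference, not a figure from the study):

```python
# Back-of-envelope check of the OECD working paper's figures as quoted
# in the article above. Only high_risk, medium_risk and jobs_at_risk
# come from the article; implied_total is an inferred assumption.

high_risk = 0.14     # share of jobs with >= 70% automation probability
medium_risk = 0.32   # share with probability between 50% and 70%

vulnerable_share = high_risk + medium_risk   # the "nearly half" headline
jobs_at_risk = 210e6                         # study figure, 32 countries

# If the 210m covers both risk bands, total employment across the
# 32 countries would be roughly:
implied_total = jobs_at_risk / vulnerable_share

print(f"vulnerable share: {vulnerable_share:.0%}")       # 46%
print(f"implied employment: {implied_total / 1e6:.0f}m") # ~457m
```

The 46% combined share is what licenses the headline "nearly half of jobs are vulnerable", even though only 14% sit in the high-risk band.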


There are many signals emerging that are related to the development of a new economic paradigm - one suited to the massive and still emerging collaborative commons and to managing the viability and homeostasis of our environmental conditions. This is a good summary of one contribution related to ‘Doughnut Economics’. The actual talk given is not yet available, but the graphics in this article provide good insight into these ideas.
“The goal of the 21st century economy should be to meet the needs of all within the means of the planet.” In today’s world we are addicted to growth. That’s why Kate Raworth, renegade economist and author of Financial Times and Forbes book of the year ‘Doughnut Economics’, proposes a new 21st century economy fit for our future.

An economist gave the most compelling design talk at TED—about doughnuts

There were plenty of great design talks at the TED conference this year. Over the five-day ideas conference held in Vancouver last week, revered architects, urbanists, engineers, and a winsome illustrator took turns regaling the audience about the power of design to create a beautiful, more humane future.

But it was an economist who arguably gave the most compelling and consequential design talk of all.

In a 15-minute lecture, Oxford University researcher Kate Raworth traced today’s economic ills to one obsolete graphic: the growth chart. Used by every government and corporation as the single metric for progress since the 1960s, the growth chart instills the fantasy of unending growth without regard for the finite amount of resources available. Specifically, every government thinks that the solution to all problems lies in more and more GDP growth, she said.

Here is her 2014 TED Talk

Why it's time for 'Doughnut Economics' | Kate Raworth | TEDxAthens

Economic theory is centuries out of date and that's a disaster for tackling the 21st century's challenges of climate change, poverty, and extreme inequality. Kate Raworth flips economic thinking on its head to give a crash course in alternative economics, explaining in three minutes what they'll never teach you in three years of a degree. Find out why it's time to get into the doughnut...

And here is her most recent 20 min video presentation

Kate Raworth, Doughnut Economics | Fixing the future, CCCB, Barcelona 2018

The bad news: the world is broken. The good? We can fix it. And now for the ugly: it’s going to get messy. Luckily there are plenty of people who are happy to get stuck in. Having now mapped over 500 planet-changing projects, Atlas of the Future sees our role as providing a window to the work of these innovators. On 13 March 2018 a future-‘supergroup’ gathered at the CCCB in Barcelona, ‘City of the Possible’, for our first event: ‘Fixing the future: adventures in a better tomorrow’.


The sound examples in this blog post are a MUST HEAR - for anyone who wants to get a sense of how our personal AI-ssistant will help us in the very near future - this is STUNNING. This anticipates how we interact with our phones and ‘Google home’ in the very near future.
While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses.

The Google Duplex technology is built to sound natural, to make the conversation experience comfortable. It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.

Exclusive: Google's Duplex could make Assistant the most lifelike AI yet

Experimental technology, rolling out soon in a limited release, makes you think you’re talking to a real person.
This could be the next evolution of the Assistant, Google's rival to Amazon's Alexa, Apple's Siri and Microsoft's Cortana. It sounds remarkably -- maybe even eerily -- human, pausing before responding to questions and using verbal tics, like "um" and "uh." It says "mm hmm" as if it's nodding in agreement. It elongates certain words as though it's buying time to think of an answer, even though its responses are instantaneously programmed by algorithms.

Built with technology Google calls "Duplex" -- and developed by engineers and product designers in Tel Aviv, New York and Mountain View -- the AI sounds as though the future of voice assistants has arrived.
Well, almost arrived.

The original Google post is here with other stunning examples

Google Duplex: An AI System for Accomplishing Real World Tasks Over the Phone




This is another signal in the blending of basic science, the digital environment and the emerging new economy - new paradigms, new accounting systems, new analytics - to enable new ways to value the flows of our values.
“Having an actual physical model and showing that this is a naturally occurring process might open up new ways to think about those functions,”

Stanford physicist finds that swirling liquids work similarly to bitcoin

The physics involved with stirring a liquid operate the same way as the mathematical functions that secure digital information. This parallel could help in developing even more secure ways of protecting digital information.
Fluid dynamics is not something that typically comes to mind when thinking about bitcoin. But for one Stanford physicist, the connection is as simple as stirring your coffee.

In a study published April 23 in Proceedings of the National Academy of Sciences, Stanford applied physics doctoral student William Gilpin described how swirling liquids, such as coffee, follow the same principles as transactions with cryptocurrencies such as bitcoin. This parallel between the mathematical functions governing cryptocurrencies and natural, physical processes may help in developing more advanced digital security and in understanding physical processes in nature.
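The parallel Gilpin draws is that chaotic mixing, like a cryptographic hash, is deterministic yet scrambles nearby inputs beyond recognition and is hard to run backwards. A toy sketch of that idea (this is purely illustrative and is not the paper's fluid model: the logistic map here stands in for the stirred fluid, and `chaotic_digest` is a made-up name, not a real cryptographic function):

```python
# Toy illustration of the mixing/hashing parallel: a chaotic map is
# deterministic, but tiny input differences diverge so fast that the
# result behaves like a digest. NOT cryptographically secure.

def chaotic_digest(data: bytes, rounds: int = 64) -> str:
    # Seed a trajectory in (0, 1) from the input bytes.
    x = (sum(data) % 251 + 1) / 253
    out = []
    for b in data:
        # Perturb the state with the next byte, then "stir" it with
        # repeated logistic-map steps (chaotic at r = 3.99).
        x = (x + b / 256) % 1 or 0.5
        for _ in range(rounds):
            x = 3.99 * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out).hex()

a = chaotic_digest(b"coffee")
b_ = chaotic_digest(b"coffef")  # one character changed
print(a, b_)  # nearby inputs, thoroughly different digests
```

Running the same input twice gives the same digest (the "stirring" is deterministic), while a one-character change decorrelates the output - the avalanche behavior that both stirred coffee and hash functions share.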


While most people accept we live in a time of accelerating change, and some even grasp that we are living through a change in the conditions of change, the paradox we live every day is that some of our assumptions haven’t caught up with the Newtonian world (Look, the sun is going down). Many know of Einstein, yet few can claim to truly grasp the notion of ‘space-time curvature’ because we continue to breathe the idea of Newtonian gravity. Even harder to grasp is the evaporation of an ‘objective frame of reference’ that is the consequence of Einstein’s Theory of Relativity. But now we are enacting the sciences of Quantum Theory, which are on the verge of becoming part of our everyday technology (which derives from Techne = knowledge as ‘know-how’).
While the article is a lightweight and accessible account of this rapidly developing frontier, it’s still mind-boggling.
Entanglement may be an emerging metaphor for connectedness and complexity.
entanglement can also help link and combine the computing power of systems in different parts of the globe. It is easy to see how that makes it a crucial aspect of quantum computation. Another promising avenue is truly secure communications. That’s because any attempt to interfere with systems involving entangled particles immediately disrupts the entanglement, making it obvious that a message has been tampered with.

Scientists Discover How to Harness the Power of Quantum Spookiness by Entangling Clouds of Atoms

From tunneling through impenetrable barriers to being in two places at the same time, the quantum world of atoms and particles is famously bizarre. Yet the strange properties of quantum mechanics are not mathematical quirks—they are real effects that have been seen in laboratories over and over.

One of the most iconic features of quantum mechanics is “entanglement”—describing particles that are mysteriously linked regardless of how far away from each other they are. Now three independent European research groups have managed to entangle not just a pair of particles, but separated clouds of thousands of atoms. They’ve also found a way to harness their technological potential.

When particles are entangled they share properties in a way that makes them dependent on each other, even when they are separated by large distances. Einstein famously called entanglement “spooky action at a distance,” as altering one particle in an entangled pair affects its twin instantaneously—no matter how far away it is.


This is a strong signal of the acceleration of the emerging domestication of DNA.

The Genome Project-write (GP-write)

The Genome Project-write (GP-write) is an open, international research project led by a multi-disciplinary group of scientific leaders who will oversee a reduction in the costs of engineering and testing large genomes in cell lines more than 1,000-fold within ten years.

GP-write will include whole genome engineering of human cell lines and other organisms of agricultural and public health significance. Thus, the Human Genome Project-write (HGP-write) will be a critical core activity within GP-write focused on synthesizing human genomes in whole or in part. It will also be explicitly limited to work in cells, and organoids derived from them only. Because of the special challenges surrounding human genomes, this activity will include an expanded examination of the ethical, legal and social implications of the project.

The overarching goal of such an effort is to understand the blueprint for life provided by the Human Genome Project (HGP-read).

HGP-read aimed to “read” a human genome. Successfully completed in 2003, HGP-read is now widely recognized as one of the great feats of exploration, one that sparked a global revolution in science and medicine, particularly in genomic-based diagnostics and therapeutics.

But our understanding of the human genome – and the full benefits to humanity to be obtained from this knowledge — remains far from complete. Many scientists now believe that to truly understand our genetic blueprint, it is necessary to “write” DNA and build human (and other) genomes from scratch. Such an endeavor will require research and development on a grand scale.


Well this is a very interesting signal in the continued domestication of DNA - still discovering some fundamentals.

Scientists discover new DNA structure that's not a double helix

In a paper published in Nature Chemistry, researchers from Australia describe the first-ever sighting of a DNA component—called the intercalated motif (i-motif)—within living human cells. The shape of the structure has been likened to a “twisted knot.”

“The i-motif is a four-stranded ‘knot’ of DNA,” said genomicist Marcel Dinger, who co-led the research. “In the knot structure, C [cytosine] letters on the same strand of DNA bind to each other—so this is very different from a double helix, where ‘letters’ on opposite strands recognize each other, and where Cs bind to Gs [guanines].”

To identify the i-motif, which had been previously identified in vitro but never in living cells, the researchers developed antibody fragments dubbed “iMabs” that could recognize and bind to i-motifs in cells. The researchers added fluorescent dyes to the iMabs to make them easy to spot.


This is another signal related to a deeper understanding of communication in the domains of bacteria.

Study reveals how bacteria communicate in groups to avoid antibiotics

In a new study published in the Journal of Biological Chemistry (JBC), researchers from the University of Notre Dame and the University of Illinois at Urbana-Champaign have found that the bacterium Pseudomonas aeruginosa, a pathogen that causes pneumonia, sepsis and other infections, communicates distress signals within a group of bacteria in response to certain antibiotics. This communication was found to vary across the colony and suggests that this bacterium may develop protective behaviors that contribute to its ability to tolerate some antibiotics.

"There is a general lack of understanding about how communities of bacteria, like the opportunistic pathogen P. aeruginosa, respond to antibiotics," said Nydia Morales-Soto, senior research scientist in civil and environmental engineering and earth sciences (CEEES) at the University of Notre Dame and lead author of the paper. "Most of what we know is from studies about stationary biofilm communities, whereas less is known about the process beforehand when bacteria are colonizing, spreading and growing. In this study, our research team specifically reviewed the behavior of bacteria during this period and what that may mean for antibiotic resistance."


Research continues to discover that biological communication at all levels is increasingly complex - from horizontal gene transfer to exchanges of all forms of biological components and signals. The mechanisms of individual-species-environment evolution involve more than we have imagined. This is worth the read.
“There are fundamental differences between viruses and vesicles: Viruses can replicate and vesicles cannot,” Margolis said. “But there are many variants in between. Where do viruses start, and where do extracellular vesicles start?”
“Cell-cell communication is one of the most ancient mechanisms that makes us who we are. Since vesicles resemble viruses, the question of course is whether the first extracellular vesicles were primitive viruses and the viruses learned from extracellular vesicles or vice versa.”
Around 8 percent of the human genome is ultimately derived from viruses. Although some of this DNA is, in fact, “junk,” scientists are learning that much of it plays a role in our biology.
“Although these viruses aren’t good for individuals, they provide the raw materials for new genes. They’re a potential gold mine.”

It’s now clear that extracellular vesicles are far from simple cellular debris, and the viral genes littering our DNA aren’t exactly junk - researchers have only just begun to crack the mystery of what they can do.

Cells Talk in a Language That Looks Like Viruses

Live viruses may seem completely different from the message-carrying vesicles that cells release. But a vast population of particles intermediate between the two hints at their deep evolutionary connection.
Is it a live virus? An extracellular vesicle that delivers information about a cell? An incomplete and defective virus particle? A vesicle carrying viral components? Classifying the closely related particles that cells release can be a challenge.

For cells, communication is a matter of life and death. The ability to tell other members of your species — or other parts of the body — that food supplies are running low or that an invading pathogen is near can be the difference between survival and extinction. Scientists have known for decades that cells can secrete chemicals into their surroundings, releasing a free-floating message for all to read. More recently, however, scientists discovered that cells could package their molecular information in what are known as extracellular vesicles. Like notes passed by children in class, the information packaged in an extracellular vesicle is folded and delivered to the recipient.

The past five years have seen an explosion of research into extracellular vesicles. As scientists uncovered the secrets about how the vesicles are made, how they package their information and how they’re released, it became clear that there are powerful similarities between vesicles and viruses.

….this similarity is more than mere coincidence. It’s not just that viruses appear to hijack the cellular pathways used to make extracellular vesicles for their own production — or that cells have also taken on some viral components to use in their vesicles. Extracellular vesicles and viruses, Margolis argues, are part of a continuum of membranous particles produced by cells. Between these two extremes are lipid-lined sacs filled with a variety of genetic material and proteins — some from hosts, some from viruses — that cells can use to send messages to one another.


This is a very short article presenting about 7 graphs - well worth the view to see just how fast the energy mix has changed in the last 5 years in the UK.
For example, in 2012 coal was 43.2% of electricity generation - in 2017 it was down to 7%.

Electricity since 2012

On this page I chart how electricity is changing year on year. This is done using a series of charts with commentary provided through my blogs. All of the charts are automatically recalculated on a monthly basis.

A blog piece on the data as it stood at the beginning of April 2017 can be found here.


And another important signal of the change in global energy geopolitics and transportation.

Electric Vehicles Begin To Bite Into Oil Demand

The latest report from Bloomberg New Energy Finance shows that economics are driving the change, with the total cost of ownership of electric buses far outperforming the alternatives. The report says a 110kWh battery e-bus coupled with the most expensive wireless charging reaches parity with a diesel bus on total cost of ownership at around 60,000 km traveled per year (37,000 miles). This means that a bus with the smallest battery, even when coupled with the most expensive charging option, would be cheaper to run in a medium-sized city, where buses travel on average 170km/day (106 miles).

Today large cities with high annual bus mileages therefore choose from a number of electric options, all cheaper than diesel and CNG buses. The BNEF report says, ‘Even the most expensive electric bus at 80,000km per year has a TCO of $0.92/km, just at par with diesel buses. Compared to a CNG bus, it is around $0.11/km cheaper in terms of the TCO. This indicates that in a megacity, where buses travel at least 220km/day, using even the most expensive 350kWh e-bus instead of a CNG bus could bring around $130,000 in operational cost savings over the 15-year lifetime of a bus.’

For every 1,000 battery-powered buses on the road, about 500 barrels a day of diesel fuel will be displaced from the market, according to BNEF calculations. In 2018, the volume of oil-based fuel demand that buses remove from the market may rise 37% to 279,000 barrels a day, or approximately the equivalent of the oil consumption of Greece. By 2040, this number could rise as high as 8 million barrels per day (bpd).
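As a quick sanity check, the article's own numbers reproduce in a few lines of Python. All figures below are taken from the BNEF quotes above; the implied fleet size at the end is my own rough extrapolation, not BNEF's:

```python
# Figures quoted from the BNEF report above.
cost_gap_per_km = 0.11   # $/km TCO advantage of the e-bus over a CNG bus
km_per_day = 220         # megacity daily bus mileage
lifetime_years = 15

savings = cost_gap_per_km * km_per_day * 365 * lifetime_years
print(f"Lifetime savings vs CNG: ${savings:,.0f}")  # → $132,495, i.e. "around $130,000"

# Displacement: ~500 barrels/day of diesel per 1,000 e-buses.
barrels_per_bus = 500 / 1000
implied_fleet = 279_000 / barrels_per_bus
print(f"Implied fleet behind 279,000 bpd: {implied_fleet:,.0f} buses")  # → 558,000 buses
```

Note that the 279,000 bpd figure in the article covers all fuel demand that buses remove from the market in 2018, so the implied 558,000-bus fleet is only a consistency check, not a BNEF claim.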


This is an interesting signal for a couple of reasons. The obvious one is stated in the title - the invention of a new device advancing computational optics. Another key reason is the role interdisciplinary scientists play in bridging different domains. The 9 min video by the key author doesn’t really explain the invention of the particular device - but it illuminates the author’s ability to bring different domains together.

Plasmonic modulator could lead to a new breed of electro-optic computer chips

Researchers have created a miniaturized device that can transform electronic signals into optical signals with low signal loss. They say the electro-optic modulator could make it easier to merge electronic and photonic circuitry on a single chip. The hybrid technology behind the modulator, known as plasmonics, promises to rev up data processing speeds. “As with earlier advances in information technology, this can dramatically impact the way we live,” Larry Dalton, a chemistry professor emeritus at the University of Washington, said in a news release. Dalton is part of the team that reported the advance today in the journal Nature.


New forms of computation? Maybe. This next article signals another shift in domesticating DNA - using CRISPR for fast, inexpensive diagnosis. The 3 min video provides an excellent, accessible explanation.

Mammoth Biosciences launches a CRISPR-powered search engine for disease detection

Most people tend to think of CRISPR as a groundbreaking gene-editing technology that can hunt down and snip away bits of DNA, like the cut and paste function on a keyboard. While many research projects tend to emphasize the potential of that process in replacing target bits of genetic material, for Mammoth Biosciences, the search function is the real game changer.

“Control + F is the exciting part,” Mammoth co-founder and CEO Trevor Martin told TechCrunch in an interview. “At core it’s just this amazing search engine that we can use to find things. The way that we search for things is just like Google.”
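To make the "Control + F" analogy concrete, here is a toy sketch of the search idea in code. This is only the string-matching analogy, not Mammoth's actual detection chemistry: real Cas enzymes also require a PAM site and tolerate some mismatches, both of which this sketch ignores:

```python
def find_target(genome: str, guide: str) -> int:
    """Toy 'Ctrl+F for DNA': return the index of the first exact match
    of a guide sequence in a genome string, or -1 if absent."""
    return genome.find(guide)

genome = "ATGCGTACGTTAGCCGTAAGCTT"
guide = "ACGTTAGC"
print(find_target(genome, guide))  # → 6
```

A CRISPR-Cas system is programmable in the same sense: swap in a different guide sequence and the same machinery hunts for a different target.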


This is an interesting short piece with a 3 min video, 36 awesome pictures of Mars, and interesting links to other related articles - and it raises a question about the speed of evolution even without domesticating DNA.

Chernobyl's Mutated Species May Help Protect Astronauts

Some species in the radioactive site show resistances to radiation—and their genetic protections may one day be applied to humans.
A former power plant in what is today northern Ukraine, Chernobyl experienced a catastrophic nuclear reactor accident in April 1986, and it is still contaminated with the same kind of gamma radiation that astronauts will encounter in deep space. Creatures big and small, from wolves to microbes, continue to live inside the thousand-square-mile Exclusion Zone.

Mousseau has visited the site regularly since 2000, looking at hundreds of species to see how they react to the environment. Some, like the radiantly red firebug, mutate aesthetically, their normally symmetrical designs warped and fractured. Others, including certain species of birds and bacteria, have shown an increased tolerance and resistance to the radiation.

These differences may offer clues to help with human spaceflight.
“I think that within human genomes, there are secrets to biological mechanisms for resisting or tolerating the effects of radiation,” Mousseau says. “The trick is to figure out what those mechanisms are, and to maybe turn them on or enhance them in some way.”


This is cool - although there was an effort to develop similar prototypes for car tires about 10 years ago, those weren’t 3D printed. Maybe coming to a bike store near you soon. The video is 1 min.

Company Introduces First 3-D Printed Flexible, Airless Bicycle Tires

This is a video of industrial 3D printer manufacturer BigRep creating the world's first 3-D printed airless bicycle tires with their new PRO FLEX TPU (thermoplastic polyurethane) based flexible filament. They do seem to hold a bike up, although I doubt they get any grip on the pavement and probably drift like plastic Big Wheels tires when you try to stop. Still, add an outer rubber tread and you might be onto something. Now -- I want you to get into something. "What are you saying?" I want you to ride in my bicycle basket. "Um, don't you remember what happened the last time somebody rode in your bicycle basket?" Of course, Toto and I went to Oz and had the time of our lives. "No, the LAST last time." Oh shit -- E.T.! I almost forgot about that little creeper. You know he was just phoning sex-chat lines, right?

Thursday, May 3, 2018

Friday Thinking 4 May 2018




Stanislaw Lem thus concludes that if our technological civilization is to avoid falling into decay, human obsolescence in one form or another is unavoidable. The sole remaining option for continued progress would then be the “automatization of cognitive processes” through development of algorithmic “information farms” and superhuman artificial intelligences. This would occur via a sophisticated plagiarism, the virtual simulation of the mindless, brute-force natural selection we see acting in biological evolution, which, Lem dryly notes, is the only technique known in the universe to construct philosophers, rather than mere philosophies.

The result is a disconcerting paradox, which Lem expresses early in the book: To maintain control of our own fate, we must yield our agency to minds exponentially more powerful than our own, created through processes we cannot entirely understand, and hence potentially unknowable to us. This is the basis for Lem’s explorations of The Singularity, and in describing its consequences he reaches many conclusions that most of its present-day acolytes would share. But there is a difference between the typical modern approach and Lem’s, not in degree, but in kind.

Unlike the commodified futurism now so common in the bubble-worlds of Silicon Valley billionaires, Lem’s forecasts weren’t really about seeking personal enrichment from market fluctuations, shiny new gadgets, or simplistic ideologies of “disruptive innovation.” In Summa Technologiae and much of his subsequent work, Lem instead sought to map out the plausible answers to questions that today are too often passed over in silence, perhaps because they fail to neatly fit into any TED Talk or startup business plan: Does technology control humanity, or does humanity control technology? Where are the absolute limits for our knowledge and our achievement, and will these boundaries be formed by the fundamental laws of nature or by the inherent limitations of our psyche? If given the ability to satisfy nearly any material desire, what is it that we actually would want?

To Lem (and, to their credit, a sizeable number of modern thinkers), the Singularity is less an opportunity than a question mark, a multidimensional crucible in which humanity’s future will be forged.

“I feel that you are entering an age of metamorphosis; that you will decide to cast aside your entire history, your entire heritage and all that remains of natural humanity—whose image, magnified into beautiful tragedy, is the focus of the mirrors of your beliefs; that you will advance (for there is no other way), and in this, which for you is now only a leap into the abyss, you will find a challenge, if not a beauty; and that you will proceed in your own way after all, since in casting off man, man will save himself.”

The Book No One Read



Urbanisation might be the most profound change to human society in a century, more telling than colour, class or continent
At some unknown moment between 2010 and 2015, for the first time in human history, more than half the world’s population lived in cities. Urbanisation is unlikely to reverse. Every week since, another 3 million country dwellers have become urbanites. Rarely in history has a small number of metropolises bundled as much economic, political and cultural power over such vast swathes of hinterlands. In some respects, these global metropolises and their residents resemble one another more than they do their fellow nationals in small towns and rural areas. Whatever is new in our global age is likely to be found in cities.

For centuries, philosophers and sociologists, from Jean-Jacques Rousseau to Georg Simmel, have alerted us to how profoundly cities have formed our societies, minds and sensibilities. The widening political polarisation between big cities and rural areas, in the United States as well as Europe, has driven home the point of quite how much the relationship between cities and the provinces, the metropolis and the country, shapes the political lives of societies. The history of cities is an extraordinary guide to understanding today’s world. Yet, compared with historians at large, as well as more present-minded scholars of urban studies, urban historians have not featured prominently in public conversation as of late.

A metropolitan world




Late on the night of October 4, 1957, Communist Party Secretary Nikita Khrushchev was at a reception at the Mariinsky Palace, in Kiev, Ukraine, when an aide called him to the telephone. The Soviet leader was gone a few minutes. When he reappeared at the reception, his son Sergei later recalled, Khrushchev’s face shone with triumph. “I can tell you some very pleasant and important news,” he told the assembled bureaucrats. “A little while ago, an artificial satellite of the Earth was launched.” From its remote Kazakh launchpad, Sputnik 1 had lifted into the night sky, blasting the Soviet Union into a decisive lead in the Cold War space race.
News of the launch spread quickly. In the US, awestruck citizens wandered out into their backyards to catch a glimpse of the mysterious orb soaring high above them in the cosmos. Soon the public mood shifted to anger – then fear. Not since Pearl Harbour had their mighty nation experienced defeat. If the Soviets could win the space race, what might they do next?

Keen to avert a crisis, President Eisenhower downplayed Sputnik’s significance. But, behind the scenes, he leapt into action. By mid-1958 Eisenhower announced the launch of a National Aeronautics and Space Administration (better known today as Nasa), along with a National Defense and Education Act to improve science and technology education in US schools. Eisenhower recognised that the battle for the future no longer depended on territorial dominance. Instead, victory would be achieved by pushing at the frontiers of the human mind.

Sixty years later, Chinese President Xi Jinping experienced his own Sputnik moment. This time it wasn’t caused by a rocket lifting off into the stratosphere, but a game of Go – won by an AI. For Xi, the defeat of Korean Go master Lee Sedol by DeepMind’s AlphaGo made it clear that artificial intelligence would define the 21st century as the space race had defined the 20th.

The event carried an extra symbolism for the Chinese leader. Go, an ancient Chinese game, had been mastered by an AI belonging to an Anglo-American company. As a recent Oxford University report confirmed, despite China’s many technological advances, in this new cyberspace race, the West had the lead.

China’s children are its secret weapon in the global AI arms race




The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

Against metrics: how measuring performance by numbers backfires




The power and potential of computation to tackle important problems has never been greater. In the last few years, the cost of computation has continued to plummet. The Pentium IIs we used in the first year of Google performed about 100 million floating point operations per second. The GPUs we use today perform about 20 trillion such operations — a factor of about 200,000 difference — and our very own TPUs are now capable of 180 trillion (180,000,000,000,000) floating point operations per second.

Even these startling gains may look small if the promise of quantum computing comes to fruition. For a specialized class of problems, quantum computers can solve them exponentially faster. For instance, if we are successful with our 72 qubit prototype, it would take millions of conventional computers to be able to emulate it. A 333 qubit error-corrected quantum computer would live up to our name, offering a 10^100x (a googol) speedup.
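Both headline figures in Brin's letter can be checked with quick arithmetic. Reading the written-out speedup as a googol (10^100) is my inference from the "live up to our name" pun, since googol is the word behind Google's name:

```python
# Quick arithmetic on the two claims above.
pentium_ii_flops = 100e6   # ~100 million FLOPS (first-year Google hardware)
tpu_flops = 20e12          # ~20 trillion FLOPS (today's GPUs, per the letter)
print(tpu_flops / pentium_ii_flops)  # → 200000.0, the "factor of about 200,000"

# A 333-qubit machine has 2**333 basis-state amplitudes -- on the order of
# a googol (10**100), hence the "live up to our name" line.
print(len(str(2 ** 333)))  # → 101 digits, i.e. about 1.7 x 10**100
```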

There are several factors at play in this boom of computing. First, of course, is the steady hum of Moore’s Law, although some of the traditional measures such as transistor counts, density, and clock frequencies have slowed. The second factor is greater demand, stemming from advanced graphics in gaming and, surprisingly, from the GPU-friendly proof-of-work algorithms found in some of today’s leading cryptocurrencies, such as Ethereum. However, the third and most important factor is the profound revolution in machine learning that has been building over the past decade. It is both made possible by these increasingly powerful processors and is also the major impetus for developing them further.

The new spring in artificial intelligence is the most significant development in computing in my lifetime. When we started the company, neural networks were a forgotten footnote in computer science; a remnant of the AI winter of the 1980’s. Yet today, this broad brush of technology has found an astounding number of applications.

Every month, there are stunning new applications and transformative new techniques. In this sense, we are truly in a technology renaissance, an exciting time where we can see applications across nearly every segment of modern society.

However, such powerful tools also bring with them new questions and responsibilities. How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?

Sergey Brin - Alphabet - 2017 Founders’ Letter




Added to this, there is technological anxiety, too – what is it to be a man when there are so many machines? Thus, Dilov invents a Fourth Law of Robotics, to supplement Asimov’s famous three, which states that ‘the robot must, in all circumstances, legitimate itself as a robot’. This was a reaction by science to the roboticists’ wish to give their creations ever more human qualities and appearance, making them subordinate to their function – often copying animal or insect forms.

Finally, it was Kesarovski’s time. He was a populariser of science, often writing computer guides for children, as well as essays that lauded information technology as a solution to future problems. This was reflected in his short stories, three of which were published in the collection The Fifth Law of Robotics (1983). In the first, he explored a vision of the human body as a cybernetic machine. A scientist looking for proof of alien consciousness finds it – in his own blood cells. Deciphering messages sent by an alien mind, trying to decode what their descriptions of society actually mean, he gradually comes to understand his own body as a sort of robot.

Kesarovski’s vision of nesting cybernetic machines – turtles all the way down or up – indicates his own training as one of the regime’s specialists: he was a more optimistic writer than Dilov.

In Kesarovski’s telling, the Fifth Law [of Robotics] states that ‘a robot must know it is a robot’. As the novella progresses, we face a cyborg that melds the best of machine and human minds together…  For Kesarovski, computers and robots held dangers, but also a promise, if humanity could one day see that it was both a type of robot itself, and in a position only to gain from the machines’ powers, allowing it to attain the next step in its historical progress.

Communist robot dreams





This is a nice summary of the history of AI to this point by Rodney Brooks.
“The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.”

The Origins of “Artificial Intelligence”

THE EARLY DAYS
It is generally agreed that John McCarthy coined the phrase “artificial intelligence” in the written proposal for a 1956 Dartmouth workshop, dated August 31st, 1955. It is authored by, in listed order, John McCarthy of Dartmouth, Marvin Minsky of Harvard, Nathaniel Rochester of IBM and Claude Shannon of Bell Laboratories. Later all but Rochester would serve on the faculty at MIT, although by early in the sixties McCarthy had left to join Stanford University. The nineteen page proposal has a title page and an introductory six pages (1 through 5a), followed by individually authored sections on proposed research by the four authors. It is presumed that McCarthy wrote those first six pages which include a budget to be provided by the Rockefeller Foundation to cover 10 researchers.

The title page says A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE. The first paragraph includes a sentence referencing “intelligence”:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

And then the first sentence of the second paragraph starts out:
The following are some aspects of the artificial intelligence problem:

That’s it! No description of what human intelligence is, no argument about whether or not machines can do it (i.e., “do intelligence”), and no fanfare on the introduction of the term “artificial intelligence” (all lower case).


This is a great summary from Nature of the current state of memristors and their potential for a new computational paradigm.

The future of electronics based on memristive systems

Abstract
A memristor is a resistive device with an inherent memory. The theoretical concept of a memristor was connected to physically measured devices in 2008 and since then there has been rapid progress in the development of such devices, leading to a series of recent demonstrations of memristor-based neuromorphic hardware systems. Here, we evaluate the state of the art in memristor-based electronics and explore where the future of the field lies. We highlight three areas of potential technological impact: on-chip memory and storage, biologically inspired computing and general-purpose in-memory computing. We analyse the challenges, and possible solutions, associated with scaling the systems up for practical applications, and consider the benefits of scaling the devices down in terms of geometry and also in terms of obtaining fundamental control of the atomic-level dynamics. Finally, we discuss the ways we believe biology will continue to provide guiding principles for device innovation and system optimization in the field.


This is another signal of the emergence of a new scientific paradigm into everyday reality.

Spooky quantum entanglement goes big in new experiments

Two teams entangled the motions of two types of small, jiggling devices
Quantum entanglement has left the realm of the utterly minuscule, and crossed over to the just plain small. Two teams of researchers report that they have generated ethereal quantum linkages, or entanglement, between pairs of jiggling objects visible with a magnifying glass or even the naked eye — if you have keen vision.

Physicist Mika Sillanpää and colleagues entangled the motion of two vibrating aluminum sheets, each 15 micrometers in diameter — a few times the thickness of spider silk. And physicist Sungkun Hong and colleagues performed a similar feat with 15-micrometer-long beams made of silicon, which expand and contract in width in a section of the beam. Both teams report their results in the April 26 Nature.

“It’s a first demonstration of entanglement over these artificial mechanical systems,” says Hong, of the University of Vienna. Previously, scientists had entangled vibrations in two diamonds that were macroscopic, meaning they were visible (or nearly visible) to the naked eye. But this is the first time entanglement has been seen in macroscopic structures constructed by humans, which can be designed to meet particular technological requirements.


It’s taking longer than many anticipated to execute powerful augmented reality - here’s one very good signal.
“There are lots of applications for this technology, including in teaching, physiotherapy, laparoscopic surgery and even surgical planning,” said Watts, who developed the technology with fellow graduate student Michael Fiest.

Augmented reality system lets doctors see under patients’ skin without the scalpel

New technology lets clinicians see patients’ internal anatomy displayed right on the body.
New technology is bringing the power of augmented reality into clinical practice.
The system, called ProjectDR, allows medical images such as CT scans and MRI data to be displayed directly on a patient’s body in a way that moves as the patient does.

“We wanted to create a system that would show clinicians a patient’s internal anatomy within the context of the body,” explained Ian Watts, a computing science graduate student and the developer of ProjectDR.

The technology includes a motion-tracking system using infrared cameras and markers on the patient’s body, as well as a projector to display the images. But the really difficult part, Watts explained, is having the image track properly on the patient’s body even as they shift and move. The solution: custom software written by Watts that gets all of the components working together.
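The article doesn't spell out ProjectDR's math, but a standard ingredient of marker-based tracking is estimating a rigid transform (rotation plus translation) from tracked marker positions and re-applying it to the projected overlay. Here is a hypothetical, minimal 2-D sketch; a real system like the one described would solve a full 3-D pose from three or more infrared markers:

```python
import math

def similarity_from_two_markers(p1, p2, q1, q2):
    """Rotation + translation mapping marker pair (p1, p2) onto (q1, q2).
    Minimal 2-D sketch of marker-based registration."""
    ang_p = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    ang_q = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    theta = ang_q - ang_p                  # how much the patient rotated
    c, s = math.cos(theta), math.sin(theta)
    tx = q1[0] - (c * p1[0] - s * p1[1])   # translation that lands p1 on q1
    ty = q1[1] - (s * p1[0] + c * p1[1])
    def remap(pt):                         # re-project one overlay point
        x, y = pt
        return (c * x - s * y + tx, s * x + c * y + ty)
    return remap

# Patient shifts 5 units to the right; every CT-overlay point moves with them.
remap = similarity_from_two_markers((0, 0), (1, 0), (5, 0), (6, 0))
print(remap((2, 3)))  # → (7.0, 3.0)
```

The hard part Watts describes, keeping the image tracking "even as they shift and move", amounts to re-solving this transform every frame from the infrared marker positions.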


This is a must-see gif - a 2-second video.

Girl lost both her hands as a baby. Here she is testing the precision and dexterity of her new 3D-printed bionics




This is an important signal to watch - right now it’s definitely a weak signal - but in the coming decades it may displace many current professional sports domains. The key is not just the looming ‘virtual reality’ dimension - it is the blending of a very wide variety of participation by pros and fans, spectatorship, and the production of a vast diversity of gaming genres and actual games.

NINJA’S FORTNITE TOURNAMENT WAS AN EXHILARATING AND UNPRECEDENTED E-SPORTS EXPERIMENT

The future of e-sports may be in hybrid entertainment that puts fans, pros, and streamers together in the same server
As I found myself seated at a gaming PC on the floor of the Luxor hotel’s exuberant Esports Arena, preparing to play Fortnite among the best players in the world, I can say my confidence levels were not very high. I had never played video games competitively before, though not for lack of trying. I consider myself an above average player of most shooters, from the early days of Halo and Call of Duty to now Destiny and Overwatch. Yet the feeling of playing under this kind of pressure and against players of this caliber was alien to me.

But I went to Las Vegas last weekend to see what the first big Fortnite e-sports tournament was going to look like, and specifically how it would feel to participate in it. Unlike most e-sports competitions, this one let members of the public compete, and it all centered on the chance to play against Tyler “Ninja” Blevins, the most popular streamer on Twitch and one of the world’s most talented Fortnite players. Just a few minutes into my first match on Saturday evening, one of nine consecutive games Ninja would participate in, I found myself under fire. Seconds later, an opponent descended on me and took me out with a shotgun blast. I never stood a chance.

I looked up at the big screen behind me and off to the left, an enormous monitor featuring Ninja’s perspective spanning the entire back wall of the arena. Below it, Ninja was playing on his own custom machine located center stage. I wanted to see whether it was the Twitch star that had taken me out. Thankfully, it wasn’t; my poor performance wasn’t broadcast to hundreds of thousands of people watching online. But in a way, it would have been an honor to say I got to personally face off against one of the best, even if I inevitably lost. And that’s precisely what made the event, officially called Ninja Vegas 18, such an unprecedented e-sports experiment.

As for Ninja’s event, it was a resounding success. Although Ninja won only one of his nine games, the level of competition from both professional players and relatively unknown competitors was wildly entertaining, creating dozens of crowd-pleasing moments and surprise victories. And viewers agreed — more than 667,000 people tuned in to Ninja’s personal Twitch stream at the tournament’s peak. It broke the platform’s all-time concurrent viewer record Ninja himself set back in March when he live streamed a Fortnite session with Drake, NFL player JuJu Smith-Schuster, and rapper Travis Scott.


This is an interesting 5 min read about the difference between the ‘evils of adtech’ versus real advertising. There are some worthwhile links in the article.

How True Advertising Can Save Journalism From Drowning in a Sea of Content

Journalism is in a world of hurt because it has been marginalized by a new business model that requires maximizing “content” instead. That model is called adtech.

We can see adtech’s effects in The New York Times’ In New Jersey, Only a Few Media Watchdogs Are Left, by David Chen. His prime example is the Newark Star-Ledger, “which almost halved its newsroom eight years ago,” and “has mutated into a digital media company requiring most reporters to reach an ever-increasing quota of page views as part of their compensation.”

That quota is to attract adtech placements.
While adtech is called advertising and looks like advertising, it’s actually a breed of direct marketing, which is a cousin of spam and descended from what we still call junk mail.


Quorum sensing is a widespread strategy for group decision-making at many levels of life - from bacteria and slime molds, to insects and mammals - perhaps even ecosystems.

How to Sway a Baboon Despot

What other species can teach us about democracy
Early last year, more than 70 years after its publication, George Orwell’s Animal Farm appeared on The Washington Post’s best-seller list. A writer for the New York Observer declared the novel—an allegory involving a government run by pigs—a “guidepost” for politics in the age of Donald Trump. A growing body of research, however, suggests that animals may offer political lessons that are more than allegorical: Many make decisions using familiar political systems, from autocracy to democracy. How these systems function, and when they falter, may be telling for Homo sapiens.

As in human democracies, the types of votes in animal democracies vary. When deciding where to forage, for instance, Tonkean macaques line up behind their preferred leader; the one with the most followers wins. Swans considering when to take flight bob their heads until a “threshold of excitability” is met, at which point they collectively rise into the sky. Honeybee colonies needing a new home vote on where to go: Thomas Seeley, a Cornell biologist, has found that scout bees investigate the options and inform the other bees of potential sites through complex “waggle dances” that convey basic information (distance, direction, overall quality). When a majority is won over by a scout’s campaign, the colony heads for its new home.

Research also shows that animal democracies, like human ones, can go awry. For instance, Seeley found that bees sometimes chose a mediocre—even terrible—site over an objectively better option. When this happened, it was invariably because they had “satisficed”—that is, settled for a plausible choice that came in early, rather than waiting for more options. Seeley told me he once saw several bees return to a hive and perform “unenthusiastic, lethargic” dances. With no great choices, they began coalescing around the best of the middling ones. At the last minute, though, “one bee came back, and she was so excited,” Seeley said. “She danced and danced and danced. She must have found something wonderful. But it was too late.” The bees had picked their candidate; momentum carried the day.

Why do bees take a vote to begin with, though? In 2013, researchers at the Max Planck Institute for Human Development, the London School of Economics, and the University of Sussex used game theory to show that animals’ willingness to behave democratically redounds to their benefit. Compared with decisions handed down by tyrant leaders, democratic decisions are less likely to be flawed. Moreover, when animals have a chance to register their opinion, the gap between the average individual’s preferred outcome and the actual outcome tends to be smaller than it would be if the decision were made by fiat. In this way, animal democracy is stabilizing; few get their way, but most are relatively content.
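The claim that registering opinions shrinks the gap between the average individual's preference and the group outcome can be illustrated with a toy simulation (my own sketch, not the researchers' actual game-theoretic model): each animal has a preferred spot on a one-dimensional axis, a "despot" imposes one arbitrary individual's preference, and a "democracy" adopts the median preference, which minimizes the group's total deviation.

```python
import random
import statistics

def group_error(preferences, decision):
    """Average distance between each individual's preference and the outcome."""
    return sum(abs(p - decision) for p in preferences) / len(preferences)

def simulate(n_individuals=51, n_trials=2000, seed=42):
    rng = random.Random(seed)
    despot_err = demo_err = 0.0
    for _ in range(n_trials):
        # Each animal's preferred foraging spot, drawn on a 1-D axis.
        prefs = [rng.gauss(0.0, 1.0) for _ in range(n_individuals)]
        # Despotism: one arbitrary individual's preference becomes the outcome.
        despot_err += group_error(prefs, rng.choice(prefs))
        # Democracy: the median preference wins (it minimizes total deviation).
        demo_err += group_error(prefs, statistics.median(prefs))
    return despot_err / n_trials, demo_err / n_trials

despot, democracy = simulate()
print(f"despot avg error:    {despot:.3f}")
print(f"democracy avg error: {democracy:.3f}")
```

Across trials, the democratic outcome consistently leaves the average individual closer to what it wanted than the despotic one: few animals get exactly their way, but most are relatively content.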


This is an amazing development in our understanding of how cellular ecosystems can communicate and exchange matter with one another - especially when under stress. It is a complementary means to horizontal gene transfer. The illustrative GIF in the article is worth viewing.

Cells Talk and Help One Another via Tiny Tube Networks

How did the tunneling nanotubes go unnoticed for such a long time? Lou notes that in the last couple of decades, cancer research has centered primarily on detecting and therapeutically targeting mutations in cancer cells — and not the structures between them. “It’s right in front of our face, but if that’s not what people are focusing on, they’re going to miss it,” he said.

That’s changing now. In the last few years, the number of researchers working on TNTs and figuring out what they do has risen steeply. Research teams have discovered that TNTs transfer all kinds of cargo beyond microRNAs, including messenger RNAs, proteins, viruses and even whole organelles, such as lysosomes and mitochondria.

To understand whether the cells actively regulate these transfers, Haimovich challenged them with heat shock and oxidative stress. If changes in the environmental conditions changed the rate of RNA transfer, that “would suggest that this is a biologically regulated mechanism, not just diffusion of RNA by chance,” he explained. He found that oxidative stress induced an increase in the rate of transfer, while heat shock induced a decrease. Moreover, this effect was seen if stress was inflicted on acceptor cells, but not if it was also inflicted on donor cells prior to co-culture, Haimovich clarified by email. “This suggests that acceptor cells send signals to the donor cells ‘requesting’ mRNA from their neighbors,” he said. His results were reported in the Proceedings of the National Academy of Sciences last year.

“Our general hypothesis is that when a cell is in danger or is dying or is stressed, the cell tries to implement a way of communication that is normally used during development, because we believe that these TNTs are more for fast communication in a developing organism,” she said. “However, when the cell is affected by a disease or infected by a virus or prion, the cell is stressed out, and it sends these protrusions to try to get help from cells that are in good health — or to discharge the prions.”


The world of plants is full of surprises.

Trees are not as 'sound asleep' as you may think

High-precision three-dimensional surveying of 21 different species of trees has revealed a previously unknown cycle of subtle canopy movement during the night. The 'sleep cycles' differed from one species to another. Detection of anomalies in overnight movement could become a future diagnostic tool to reveal stress or disease in crops.

Overnight movement of leaves is well known for tree species belonging to the legume family, but it was only recently discovered that some other trees also lower their branches by up to 10 centimeters at night and raise them again in the morning. These branch movements are slow and subtle, and take place at night, which makes them difficult to identify with the naked eye. However, terrestrial laser scanning, a 3-dimensional surveying technique developed for precision mapping of buildings, makes it possible to measure the exact position of branches and leaves.


This is a very interesting project - an example of how individual, crowdsourced, and other funding can catalyze ways to mitigate large problems.

“I would never be able to work on a photo-sharing app or ‘internet startup XYZ,'” he says. “I think people overestimate the risk of high-risk projects. Personally, I think I would find it much harder to make a photo-sharing app a success–it sounds counterintuitive, because it’s much easier from an engineering perspective, but I think if you work on something that’s truly exciting and bold and complicated, then you will attract the kind of people that are really smart and talented. People that like solving complicated problems.”

The Revolutionary Giant Ocean Cleanup Machine Is About To Set Sail

Boyan Slat dropped out of school to work on his design for a device that could collect the trillions of pieces of plastic floating in the ocean. After years of work, it’s ready to take its first voyage.
Six years ago, the technology was only an idea presented at a TEDx talk. Boyan Slat, the 18-year-old presenter, had learned that cleaning up the tiny particles of plastic in the ocean could take nearly 80,000 years. Because of the volume of plastic spread through the water, and because it is constantly moving with currents, trying to chase it with nets would be a losing proposition. Slat instead proposed using that movement as an advantage: With a barrier in the water, he argued, the swirling plastic could be collected much more quickly. Then it could be pulled out of the water and recycled.

Some scientists have been skeptical that the idea is feasible. But Slat, undeterred, dropped out of his first year of university to pursue the concept, and founded a nonprofit to create the technology, The Ocean Cleanup, in 2013. The organization raised $2.2 million in a crowdfunding campaign, and other investors, including Salesforce CEO Marc Benioff, brought in millions more to fund research and development. By the end of 2018, the nonprofit says it will bring its first harvest of ocean plastic back from the North Pacific Gyre, along with concrete proof that the design works. The organization expects to bring 5,000 kilograms of plastic ashore per month with its first system. With a full fleet of systems deployed, it believes that it can collect half of the plastic trash in the Great Pacific Garbage Patch–around 40,000 metric tons–within five years.


I have to say that I love this paradox of sensorial goodness and gustatory wonder.

A Paean to PB&P

Why a peanut butter and pickle sandwich is the totally not-gross snack you need in your mouth right now.
Dwight Garner, an accomplished New York Times book critic, can count himself a member of the rarefied club of journalists whose writing has actually moved hearts and minds on a topic of great importance. In one 2012 article, he changed my life, intimately and permanently, with an ode to an object I’d never previously considered with the solemnity it deserves: the peanut butter and pickle sandwich.

When I clicked on Garner’s piece “Peanut Butter Takes On an Unlikely Best Friend” in October 2012, it was with great skepticism. I expected to be trolled with outrageous, unsupported assertions and straw-man arguments. Instead, I found myself drawn in by lip-smacking prose and a miniature history lesson. Peanut butter and pickle sandwiches were a hit at Depression-era lunch counters, I learned, and in cookbooks from the 1930s and ’40s, which recommended they be crafted with pickle relish rather than slices or spears. Garner quoted the founder of a peanut butter company, who remarked that the savory-and-sour flavor profile of the sandwich is more common in South and East Asian cuisines. This observation was my eureka moment: One of my favorite Thai dishes, papaya salad, traditionally combines raw peanuts with a lime and rice vinegar–based dressing. Perhaps the sandwich I’d only ever imagined in the context of stupid jokes about pregnancy cravings could be equally delicious.