Thursday, March 8, 2018

Friday Thinking 9 March 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9






So brainwashed are we by the false money meme of “money as wealth” that whenever anyone proposes needed infrastructure maintenance, better schools and healthcare or any public goods, we are intimidated by some defunct economist who says “Where’s the money coming from?” They ought to know better, since, of course, money is not scarce; it’s just information, as I pointed out in 2001 at the annual meeting of the Inter-American Development Bank in an invited talk, “Information, The World’s Real Currency, Is Not Scarce.”

Money Is Not Wealth: Cryptos v. Fiats!




This does not look like totalitarianism unless you squint very hard indeed. As the sociologist Kieran Healy has suggested, sweeping political critiques of new technology often bear a strong family resemblance to the arguments of Silicon Valley boosters. Both assume that the technology works as advertised, which is not necessarily true at all.

Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).
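That $23-million biology book is easy to reproduce in a few lines: two repricing rules whose multipliers compound to more than 1 make the price grow exponentially with each round. The specific multipliers and starting price below are illustrative assumptions, not the actual bots' settings.

```python
# Two repricing bots: B posts a markup over A, A reprices to just under B.
# Because 0.9983 * 1.2706 > 1, each full round multiplies the price by
# more than 1, so the listed price explodes exponentially.

def run_price_war(price_a, mult_a, mult_b, rounds):
    """Each round, bot B reprices against A, then A reprices against B."""
    for _ in range(rounds):
        price_b = price_a * mult_b   # B posts a markup over A's price
        price_a = price_b * mult_a   # A reprices to just under B's price
    return price_a

# Hypothetical settings for illustration only.
final = run_price_war(price_a=35.00, mult_a=0.9983, mult_b=1.2706, rounds=45)
print(f"${final:,.2f}")
```

After a few dozen unsupervised rounds the price is in the millions, which is all it takes to reach a number like the one above.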

In other words, we live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s. Dick was no better a prophet of technology than any science fiction writer, and was arguably worse than most. His imagined worlds jam together odd bits of fifties’ and sixties’ California with rocket ships, drugs, and social speculation. Dick usually wrote in a hurry and for money, and sometimes under the influence of drugs or a recent and urgent personal religious revelation.

Still, what he captured with genius was the ontological unease of a world in which the human and the abhuman, the real and the fake, blur together.

In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is that it sows an existential distrust.

Philip K. Dick and the Fake Humans




I went to bed believing that I was more or less in control — that the unfinished business, unrealized dreams and other disappointments in my life were essentially failures of industry and imagination, and could probably be redeemed with a fierce enough effort. I woke up to the realization of how ludicrous that was.

Am I Going Blind?




“The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical,” he told me. “And it’s very hard to calibrate how much you are moving because it always looks the same.”
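The observation has a simple mathematical root: an exponential is self-similar, so the ratio between where you are now and where you were a fixed interval ago is the same everywhere on the curve. A minimal sketch:

```python
# On an exponential f(t) = a * r**t, the lookback ratio f(t) / f(t - d)
# equals r**d no matter what t is: the curve is self-similar, so the
# local view "always looks the same" wherever you stand on it.

def lookback_ratio(r, t, d):
    return (r ** t) / (r ** (t - d))

# Doubling each period (r = 2): a one-period lookback is always 2x,
# whether you are 3 periods in or 50.
for t in (3, 10, 50):
    assert abs(lookback_ratio(2.0, t, 1) - 2.0) < 1e-9
print("the curve looks identical at every point")
```

That constancy is exactly why it is "very hard to calibrate how much you are moving": no local measurement distinguishes early from late on the curve.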

ELON MUSK’S BILLION-DOLLAR CRUSADE TO STOP A.I. APOCALYPSE




Art and craftsmanship may suggest a way of life that waned with the birth of industrial society, but this is misleading. The future of work may resemble the history of work, and this is because of our newest, most advanced technologies.

The corporate system is transforming into a maze of fragmented tasks and short-term gigs. Although the modern era is often described as a skills economy, most companies have a short-term focus, which means for a worker that when her experience accumulates, it often loses institutional value.

Computer-based digital manufacturing does not work this way. It does not use moulds or casts. Without these, there is no need to repeat the same form. Every piece can be unique, a work of art. As Mario Carpo puts it: “Repetition no longer saves money and variations no longer cost more money.” This means that the marginal cost of production is always the same.

The biggest challenge for a worker in this new environment is to think like an artist, at the same time making good use of new technology. The artist becomes the symbol of humanness building on the increasing financial value of personalization and variation. It is not a zero sum game between faulty men and flawless machines. The machines propose and create potentials rather than take over.

The modern machine changes the way we understand skills and learning. A skill has always been, and will always be, trained practice.

Learning needs to change: it is not first going through education and then finding corresponding work, but working first and then finding supporting, corresponding learning.

Work is becoming more situational and context-specific. Motivation and a sense of meaningfulness are going to be much more important than talent.

Esko Kilpi - Work of Art




As John Dewey said, “Education is not preparation for life; education is life itself. Education, therefore, is a process of living and not a preparation for future living.”



This is a 1.5-hour interview with Yuval Harari - well worth viewing. His answers are well thought out and are both hopeful and disturbing.

Yuval Noah Harari on the Rise of Homo Deus

“Studying history aims to loosen the grip of the past… It will not tell us what to choose, but at least it gives us more options.” – Yuval Noah Harari

Yuval Noah Harari is the star historian who shot to fame with his international bestseller 'Sapiens: A Brief History of Humankind'. In that book Harari explained how human values have been continually shifting since our earliest beginnings: once we placed gods at the centre of the universe; then came the Enlightenment, and from then on human feelings have been the authority from which we derive meaning and values. Now, using his trademark blend of science, history, philosophy and every discipline in between, Harari argues in his new book 'Homo Deus: A Brief History of Tomorrow' that our values may be about to shift again – away from humans, as we transfer our faith to the almighty power of data and the algorithm.

In conversation with Kamal Ahmed, the BBC’s economics editor, Harari examined the political and economic revolutions that look set to transform society, as technology continues its exponential advance. What will happen when artificial intelligence takes over most of the jobs that people do? Will our liberal values of equality and universal human rights survive the creation of a massive new class of individuals who are economically useless? And when Google and Facebook know our political preferences better than we do ourselves, will democratic elections become redundant?

As the 21st century progresses, not only our society and economy but our bodies and minds could be revolutionised by new technologies such as genetic engineering, nanotechnology and brain-computer interfaces. After a few countries master the enhancement of bodies and brains, will they conquer the planet while the rest of humankind is driven to extinction?


For all of us who believe in science - this is a very important signal for our future.

After Two Decades, Scientists Find GMOs in Corn Are Good for You.

There is a great deal of misinformation out there regarding genetically modified organisms (GMOs). From monikers like “Frankenfoods” to general skepticism, there has been a variety of biased reactions to these organisms, even though we as a species have been genetically modifying our foods in one way or another for approximately 10,000 years. Perhaps some of this distrust will be put to rest with the emergence of a new meta-analysis that shows GM corn increases crop yields and provides significant health benefits.

The analysis, which was not limited to studies conducted in the U.S. and Canada, showed that GMO corn varieties have increased crop yields worldwide by 5.6 to 24.5 percent when compared to non-GMO varieties. They also found that GM corn crops had significantly fewer mycotoxins (up to 36.5 percent less, depending on the species) — toxic chemical byproducts of crop colonization.

For this study, published in the journal Scientific Reports, a group of Italian researchers took over 6,000 peer-reviewed studies from the past 21 years and performed what is known as a “meta-analysis,” a cumulative analysis that draws from hundreds or thousands of credible studies. This type of study allows researchers to draw conclusions that are more expansive and more robust than what could be taken from a single study.
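As a rough illustration of what pooling studies buys you, here is a minimal fixed-effect, inverse-variance meta-analysis: precise studies get more weight, and the pooled standard error shrinks below any single study's. The numbers are invented for illustration; the actual Scientific Reports paper used its own, more sophisticated models.

```python
import math

# Fixed-effect inverse-variance pooling: weight each study's effect
# estimate by 1/variance, then combine. The pooled standard error is
# smaller than any individual study's, which is why meta-analytic
# conclusions are more robust than single-study ones.

def pool(effects, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical yield-increase estimates (%) from three studies,
# with their (also hypothetical) variances.
effects = [5.6, 12.0, 24.5]
variances = [4.0, 9.0, 16.0]
est, se = pool(effects, variances)
print(f"pooled effect {est:.1f}% ± {1.96 * se:.1f}%")
```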


It seems that CRISPR is not alone - as an agent of horizontal gene transfer or genetic adaptations.
“Progress is being made at a pretty stunning rate,” said biochemist David Liu, of Harvard University, who has developed several versions of CRISPR. A parade of new discoveries, he said, “suggests that it’s possible to use these genome-editing tools and not make unintended edits.”

CRISPR ‘gone wild’ has made stocks swoon, but studies show how to limit off-target editing

The fear that CRISPR-based genome repair for preventing or treating genetic diseases will be derailed by “editing gone wild” has begun to abate, scientists who are developing the technique say. Although there are still concerns that CRISPR might run amok inside patients and cause dangerous DNA changes, recent advances suggest that the risk is not as high as earlier research suggested and that clever molecular engineering can minimize it.

.. It seems that nature is full of CRISPR enzymes that are more accurate than the original Cas9, which comes from Streptococcus pyogenes bacteria. Sontheimer tested a Cas9 from the bacterium Neisseria meningitidis. In a head-to-head comparison in human embryonic kidney cells (a lab stalwart) growing in dishes, classic Cas9 hit the wrong target hundreds of times, while the NME version “exhibits a nearly complete absence of unintended targeting in human cells,” Sontheimer and his team wrote in a paper submitted to a journal. (It and the Sanger paper were posted on the bioRxiv website and have not yet been peer-reviewed.)

There is no question that if scientists aren’t careful, CRISPR can induce substantial off-target mutations. In another study Joung’s lab submitted to a journal, they show that when “promiscuous” forms of CRISPR were slipped into mice’s livers, as some genome-editing companies hope to do for some human metabolic diseases, it edited hundreds of spots in the mouse genome that it wasn’t supposed to.

Regulators will have to decide how much off-target CRISPR’ing is acceptable. Since people’s genomes experience constant natural mutations, due to cosmic rays and other forces, the level of acceptable off-target editing “should not be zero percent,” said Liu, “but editing that’s a tiny fraction of these natural changes” (and not in, say, tumor-suppressor genes).


While it does not qualify as creating a genetically modified organism, it is another advance in the domestication of DNA.
“For more than three decades, spinal cord injury research has slowly moved toward the elusive goal of abundant, long-distance regeneration of injured axons, which is fundamental to any real restoration of physical function,” said Mark Tuszynski, MD, PhD, professor of neuroscience and director of the UC San Diego Translational Neuroscience Institute.

Researchers Use Human Neural Stem Cell Grafts to Repair Spinal Cord Injuries in Monkeys

Findings represent major and essential step toward future human clinical trials
Led by researchers at University of California San Diego School of Medicine, a diverse team of neuroscientists and surgeons successfully grafted human neural progenitor cells into rhesus monkeys with spinal cord injuries. The grafts not only survived, but grew hundreds of thousands of human axons and synapses, resulting in improved forelimb function in the monkeys.

The findings, published online in the February 26 issue of Nature Medicine, represent a significant step in translating similar, earlier work in rodents closer to human clinical trials and a potential remedy for paralyzing spinal cord injuries in people.

“While there was real progress in research using small animal models, there were also enormous uncertainties that we felt could only be addressed by progressing to models more like humans before we conduct trials with people,” Tuszynski said.


Understanding our personal and species microbial ecology will enable us to become healthier both in curbing the effects of adversarial microbes and enhancing the effects of beneficial microbes.
Gallo and colleagues found that the compound had an effect both when injected and when applied topically. Among mice injected with skin cancer cells, some received a shot of 6-HAP while others got a dummy shot. Tumors grew in all the mice, but the tumors in mice given the compound were about half the size of those in mice without the compound.

Human skin bacteria have cancer-fighting powers

The microbes make a compound that disrupts DNA formation in tumor cells
Certain skin-dwelling microbes may be anticancer superheroes, reining in uncontrolled cell growth. This surprise discovery could one day lead to drugs that treat or maybe even prevent skin cancer.

The bacteria’s secret weapon is a chemical compound that stops DNA formation in its tracks. Mice slathered with one strain of Staphylococcus epidermidis that makes the compound developed fewer tumors after exposure to damaging ultraviolet radiation compared with those treated with a strain lacking the compound, researchers report online February 28 in Science Advances.

The findings highlight “the potential of the microbiome to influence human disease,” says Lindsay Kalan, a biochemist at the University of Wisconsin–Madison.


Talking about modifications of DNA - this is definitely a weak signal, of a technology that’s far from consumer-ready - but the trajectory is clear: once a number of other technologies advance enough, the biocomputer (even as an implantable enhancement) will emerge.

Inching closer to a DNA-based file system

Microsoft and UW add the concept of random access to files stored in DNA.
When it comes to data storage, efforts to get faster access grab most of the attention. But long-term archiving of data is equally important, and it generally requires a completely different set of properties. To get a sense of why getting this right is important, just take the recently revived NASA satellite as an example—extracting anything from the satellite's data will rely on the fact that a separate NASA mission had an antiquated tape drive that could read the satellite's communication software.

One of the more unexpected technologies to receive some attention as an archival storage medium is DNA. While it is incredibly slow to store and retrieve data from DNA, we know that information can be pulled out of DNA that's tens of thousands of years old. And there have been some impressive demonstrations of the approach, like an operating system being stored in DNA at a density of 215 Petabytes a gram.

But that method treated DNA as a glob of unorganized bits—you had to sequence all of it in order to get at any of the data. Now, a team of researchers has figured out how to add something like a filesystem to DNA storage, allowing random access to specific data within a large collection of DNA. While doing this, the team also tested a recently developed method for sequencing DNA that can be done using a compact USB device.
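The addressing idea can be sketched in a toy form: encode two bits per base, and prefix each payload with a short address sequence so that only matching strands need to be decoded, a software analogue of the PCR primers the real system uses for selective amplification. Everything below (the encoding, addresses, and file contents) is a simplified illustration; real DNA storage adds error correction and biochemical constraints on the sequences.

```python
# Toy addressable DNA storage: two bits per base, plus a "primer"
# address prefix so one file can be retrieved from a pool of strands
# without sequencing everything.

BASES = "ACGT"

def encode(data: bytes) -> str:
    # Each byte becomes 4 bases, most significant bit-pair first.
    return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for ch in strand[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

def store(pool, address: str, data: bytes):
    pool.append(address + encode(data))

def random_access(pool, address: str) -> bytes:
    # Analogue of PCR amplification: keep only strands with the primer.
    hits = [s[len(address):] for s in pool if s.startswith(address)]
    return decode(hits[0])

pool = []
store(pool, "ACGTACGT", b"file one")
store(pool, "TGCATGCA", b"file two")
print(random_access(pool, "TGCATGCA"))
```

Without the address prefix, getting at "file two" would mean decoding the entire pool, which is the glob-of-bits problem the filesystem-like scheme solves.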


This is a very good signal of the emerging paradigm in medical and health sciences - not just the technology but the availability of ever more massive amounts of data.
“Where I see this going is that at a young age you’ll basically get a report card,” says Khera. “And it will say for these 10 diseases, here’s your score. You are in the 90th percentile for heart disease, 50th for breast cancer, and the lowest 10 percent for diabetes.”

Forecasts of genetic fate just got a lot more accurate

DNA-based scores are getting better at predicting intelligence, risks for common diseases, and more.
Such comprehensive report cards aren’t being given out yet, but the science to create them is here. Delving into giant databases like the UK Biobank, which collects the DNA and holds the medical records of some 500,000 Britons, geneticists are peering into the lives of more people and extracting correlations between their genomes and their diseases, personalities, even habits. The latest gene hunt, for the causes of insomnia, involved a record 1,310,010 people.

The sheer quantity of material is what allows scientists like Khera to see how complex patterns of genetic variants are tied to many diseases and traits. Such patterns were hidden in earlier, more limited studies, but now the search for ever smaller signals in ever bigger data is paying off. Give Khera the simplest readout of your genome—the kind created with a $100 DNA-reading chip the size of a theater ticket—and he can add up your vulnerabilities and strengths just as one would a tally in a ledger.

Such predictions, at first hit-or-miss, are becoming more accurate. One test described last year can guess a person’s height to within four centimeters, on the basis of 20,000 distinct DNA letters in a genome. As the prediction technology improves, a flood of tests is expected to reach the market. Doctors in California are testing an iPhone app that, if you upload your genetic data, foretells your risk of coronary artery disease. A commercial test launched in September, by Myriad Genetics, estimates the breast cancer chances of any woman of European background, not only the few who have inherited broken versions of the BRCA gene. Sharon Briggs, a senior scientist at Helix, which operates an online store for DNA tests, says most of these products will use risk scores within three years.
“It’s not that the scores are new,” says Briggs. “It’s that they’re getting much better. There’s more data.”
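The "tally in a ledger" is close to literal: a polygenic risk score is a weighted sum, for each variant multiplying how many risk alleles a person carries (0, 1, or 2) by that variant's effect size. A minimal sketch, with invented effect sizes standing in for the thousands of real ones:

```python
# Polygenic score as a ledger tally: sum over variants of
# (number of risk alleles carried) * (per-allele effect size).
# The weights below are made up for illustration; real scores use
# thousands of variants with effect sizes estimated from biobank data.

def polygenic_score(allele_counts, effect_sizes):
    return sum(n * beta for n, beta in zip(allele_counts, effect_sizes))

effect_sizes = [0.12, -0.05, 0.30, 0.08]   # hypothetical per-allele weights
person_a = [2, 0, 1, 1]                     # risk alleles at each variant
person_b = [0, 2, 0, 1]

print(polygenic_score(person_a, effect_sizes))  # higher estimated risk
print(polygenic_score(person_b, effect_sizes))
```

Ranking such scores across a reference population is what turns the raw tally into the percentile "report card" Khera describes.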


There is a lot of work that involves repetitive analysis. Here’s an example of highly trained humans who are still in the ballpark when it comes to results - but hopelessly in the dust when measured against time.

The Verdict Is In: AI Outperforms Human Lawyers in Reviewing Legal Documents

A new study released this week from LawGeex, a leading AI contract review platform, has revealed a new area in which AI outperforms us: Law. Specifically, reviewing Non-Disclosure Agreements (NDAs) and accurately spotting risks within the legal documentation.

For the study, 20 human attorneys were pitted against LawGeex’s AI in reviewing 5 NDAs. The controlled conditions of the study were designed to resemble how lawyers would typically review and approve everyday contracts.

After two months of testing, the results were in: the AI finished the test with an average accuracy rating of 94 percent, while the lawyers achieved an average of 85 percent. The AI’s highest accuracy rating on an individual test was 100 percent, while the highest rating a human lawyer achieved on a single contract was 97 percent.

As far as accuracy goes, the study showed that humans can (for the most part) keep up with AI in reviewing contracts. The same couldn’t be said when it came to speed, however.
On average, the lawyers took 92 minutes to finish reviewing the contracts. The longest time taken by an individual lawyer was 156 minutes and the shortest 51 minutes.
LawGeex’s AI, on the other hand, only needed 26 seconds.


Thinking of the previous signal - here’s a formerly ‘massive’ study. Still important - but soon to become outmoded.
Researchers were shocked to find that the prevalence of anxiety, depression, and substance abuse in the Dunedin birth cohort was more than twice the rate the mental health community predicted. The reason, Dunedin researchers discovered, was a chronic underreporting of these problems by subjects long after their struggles occurred, in the way most previous studies had been conducted. By recording these issues as they occurred throughout the subjects’ lives, the Dunedin project recorded the much higher, and more accurate figure — a first step in changing the way we as a society define and deal with mental illness.

A more direct way to address mental illness was also discovered by researchers who, noting for the first time the prevalence of symptoms of schizophrenia in study participants under the age of 18, collected data with a method of cognitive testing and digital imaging of the brain through the retina. Using this “non-invasive window to the brain” to identify at-risk children for targeted treatment might decrease a child’s risk of debilitating mental illness later in life.

A New Zealand City the Size of Berkeley, CA, Has Been Studying Aging for 45 Years. Here’s What They Discovered.

The Dunedin Study, which began as a study of childhood development, has become one of humanity’s richest treasure troves of data on what makes us who we are.
Between April 1 of 1975 and March 31 of 1976, a young psychologist named Phil Silva set out to capture the psychological and medical data of every child born three years previously in Queen Mary Maternity Hospital in the city of Dunedin, on the coast of New Zealand’s South Island. Silva had gathered the same children’s data at the time of their birth. But now he had something much bigger in mind: one of the most comprehensive studies of children’s health ever attempted.

Some 45 years later, Silva’s project, The Dunedin Multidisciplinary Health and Development Study, or the Dunedin Study, has far outpaced his goals, and even his participation. He retired as its director in 2000, but the study is still running, with a stunning 95 percent of its original 1,093 participants from a range of ethnic and socioeconomic backgrounds still involved. Its data has been used in the publication of some 1,200 scientific papers, two-thirds of which have been published in peer-reviewed journals. Several have provided landmark findings and have been cited thousands of times across scientific fields.

The Dunedin Project has used raw data to cut through the noise of everyday life, giving researchers across the world the chance to observe the implications and consequences of developmental, genetic, and social influences on its subjects’ health, wealth, and happiness. The end result offers one of the clearest pictures of what makes us who we are, and why. It’s proof that we can learn from a single study over an incredibly long period of time. And in some ways, 45 years in, the study has only just begun.


This may be much more ominous - or not - it will likely depend on the transparency and oversight required if any application is to support democracy and human rights. This is a long read - but an important one. This technology will continue to be developed.
Predictive policing technology has proven highly controversial wherever it is implemented, but in New Orleans, the program escaped public notice, partly because Palantir established it as a philanthropic relationship with the city through Mayor Mitch Landrieu’s signature NOLA For Life program. Thanks to its philanthropic status, as well as New Orleans’ “strong mayor” model of government, the agreement never passed through a public procurement process.
In fact, key city council members and attorneys contacted by The Verge had no idea that the city had any sort of relationship with Palantir, nor were they aware that Palantir used its program in New Orleans to market its services to another law enforcement agency for a multimillion-dollar contract.
Because the program was never public, important questions about its basic functioning, risk for bias, and overall propriety were never answered.

PALANTIR HAS SECRETLY BEEN USING NEW ORLEANS TO TEST ITS PREDICTIVE POLICING TECHNOLOGY

Palantir deployed a predictive policing system in New Orleans that even city council members don’t know about
In May and June 2013, when New Orleans’ murder rate was the sixth-highest in the United States, the Orleans Parish district attorney handed down two landmark racketeering indictments against dozens of men accused of membership in two violent Central City drug trafficking gangs, 3NG and the 110ers. Members of both gangs stood accused of committing 25 murders as well as several attempted killings and armed robberies.

Subsequent investigations by the Bureau of Alcohol, Tobacco, Firearms and Explosives, the Federal Bureau of Investigation, and local agencies produced further RICO indictments, including that of a 22-year-old man named Evans “Easy” Lewis, a member of a gang called the 39ers who was accused of participating in a drug distribution ring and several murders.

According to Ronal Serpas, the department’s chief at the time, one of the tools used by the New Orleans Police Department to identify members of gangs like 3NG and the 39ers came from the Silicon Valley company Palantir. The company provided software to a secretive NOPD program that traced people’s ties to other gang members, outlined criminal histories, analyzed social media, and predicted the likelihood that individuals would commit violence or become a victim. As part of the discovery process in Lewis’ trial, the government turned over more than 60,000 pages of documents detailing evidence gathered against him from confidential informants, ballistics, and other sources — but they made no mention of the NOPD’s partnership with Palantir, according to a source familiar with the 39ers trial.

The program began in 2012 as a partnership between New Orleans Police and Palantir Technologies, a data-mining firm founded with seed money from the CIA’s venture capital firm. According to interviews and documents obtained by The Verge, the initiative was essentially a predictive policing program, similar to the “heat list” in Chicago that purports to predict which people are likely drivers or victims of violence.


Another signal in the domestication of bacteria.

Workbench for virus design

ETH researchers have developed a technology platform that allows them to systematically modify and customise bacteriophages. This technology is a step towards making phage therapies a powerful tool for combating dangerous pathogens.
A new era may now be dawning in the use of bacteriophages, however, as a team of researchers led by Martin Loessner, Professor of Food Microbiology at ETH Zurich, has just presented a novel technology platform in a paper published in the journal PNAS. This enables scientists to genetically modify phage genomes systematically, provide them with additional functionality, and finally reactivate them in a bacterial “surrogate” – a cell-wall deficient Listeria cell, or L-form.

The new phage workbench allows such viruses to be created very quickly and the “toolbox” is extremely modular: it allows the scientists to create almost any bacteriophages for different purposes, with a great variety of functions.

“Previously it was almost impossible to modify the genome of a bacteriophage,” Loessner says. On top of that, the methods were very inefficient. For example, a gene was only integrated into an existing genome in a tiny fraction of the phages. Isolating the modified phage was therefore often like searching for a needle in a haystack.

“In the past we had to screen millions of phages and select those with the desired characteristics. Now we are able to create these viruses from scratch, test them within a reasonable period and if necessary modify them again,” Loessner stresses.


This is a great 18 min video outlining both the concept of quantum computing and IBM’s online offering for anyone who wishes to learn about and use their implementation so far.

A Beginner’s Guide to Quantum Computing

Dr. Talia Gershon, a materials scientist by training, came to IBM Research in 2012. After 4.5 years of developing next-generation solar cell materials, she got inspired to learn about quantum computing because it might enable all kinds of discoveries (including new materials). Having authored the Beginner's Guide to the QX, she passionately believes that anyone can get started learning quantum! - Maker Faire Bay Area 2017


This is a very interesting signal for the future of centralized energy producers.
TVA, as a government-owned, fully regulated utility, has only the goals of “low cost, informed risk, environmental responsibility, reliability, diversity of power and flexibility to meet changing market conditions,” as its planning manager told the Chattanooga Free Press. (Yes, that’s already a lot of goals!)
But investor-owned utilities (IOUs), which administer electricity for well over half of Americans, face another imperative: to make money for investors. They can’t make money selling electricity; monopoly regulations forbid it. Instead, they make money by earning a rate of return on investments in electrical power plants and infrastructure.
The problem is, with demand stagnant, there’s not much need for new hardware. And a drop in investment means a drop in profit. Unable to continue the steady growth that their investors have always counted on, IOUs are treading water, watching as revenues dry up.

After rising for 100 years, electricity demand is flat. Utilities are freaking out.

The Tennessee Valley Authority is the latest to be caught short.
The US electricity sector is in a period of unprecedented change and turmoil. Renewable energy prices are falling like crazy. Natural gas production continues its extraordinary surge. Coal, the golden child of the current administration, is headed down the tubes.

In all that bedlam, it’s easy to lose sight of an equally important (if less sexy) trend: Demand for electricity is stagnant.

Thanks to a combination of greater energy efficiency, outsourcing of heavy industry, and customers generating their own power on site, demand for utility power has been flat for 10 years, and most forecasts expect it to stay that way. The die was cast around 1998, when GDP growth and electricity demand growth became “decoupled.”


Another significant project advancing our domestication of DNA - we fear the possibility of a human-created virus - a real threat - but the world of existing natural threats dwarfs our capacity to respond effectively to nature as it exists.

Global Virome Project is hunting for more than 1 million unknown viruses

The search for microbes lurking in animal hosts aims to prevent the next human pandemic
To play good defense against the next viral pandemic, it helps to know the other team’s offense. But the 263 known viruses that circulate in humans represent less than 0.1 percent of the viruses suspected to be lurking out there that could infect people, researchers report in the Feb. 23 Science.

The Global Virome Project, to be launched in 2018, aims to close that gap. The international collaboration will survey viruses harbored by birds and mammals to identify candidates that might be zoonotic, or able to jump to humans. Based on the viral diversity in two species known to host emerging human diseases — Indian flying foxes and rhesus macaques — the team estimates there are about 1.67 million unknown viruses still to be discovered in the 25 virus families surveyed. Of those, between 631,000 and 827,000 might be able to infect humans.

The $1.2 billion project aims to identify roughly 70 percent of these potential threats within the next 10 years, focusing on animals in places known to be hot spots for the emergence of human-infecting viruses. That data will be made publicly available to help scientists prepare for future virus outbreaks — or, ideally, to quash threats as they emerge.  

Thursday, March 1, 2018

Friday Thinking 2 March 2018


Known by the anodyne name “social credit,” this system is designed to reach into every corner of existence both online and off. It monitors each individual’s consumer behavior, conduct on social networks, and real-world infractions like speeding tickets or quarrels with neighbors. Then it integrates them into a single, algorithmically determined “sincerity” score. Every Chinese citizen receives a literal, numeric index of their trustworthiness and virtue, and this index unlocks, well, everything. In principle, anyway, this one number will determine the opportunities citizens are offered, the freedoms they enjoy, and the privileges they are granted.

This end-to-end grid of social control is still in its prototype stages, but three things are already becoming clear: First, where it has actually been deployed, it has teeth. Second, it has profound implications for the texture of urban life. And finally, there’s nothing so distinctly Chinese about it that it couldn’t be rolled out anywhere else the right conditions obtain. The advent of social credit portends changes both dramatic and consequential for life in cities everywhere—including the one you might call home.

In June 2014, the governing State Council of the People’s Republic of China issued a remarkable document. Dryly entitled “Planning Outline for the Construction of a Social Credit System,” it laid out the methods by which the state intended to weave together public and private records of personal behavior, and from them derive a single numeric index of good conduct. Each and every Chinese citizen would henceforth bear this index, in perpetuity.

As laid out in the proposal, the system’s stated purpose was to ride herd on corrupt government officials, penalize the manufacture of counterfeit consumer products, and prevent mislabeled or adulterated food and pharmaceuticals from reaching market. This framing connects the system to a long Confucian tradition of attempts to bolster public rectitude.

China's Dystopian Tech Could Be Contagious




Some of these proposed watersheds, such as tool-use, are old suggestions, stretching back to how the Victorians grappled with the consequences of Darwinism. Others, such as imitation or empathy, are still denied to non-humans by certain modern psychologists. In Are We Smart Enough to Know How Smart Animals Are? (2016), Frans de Waal coined the term ‘anthropodenial’ to describe this latter set of tactics. Faced with a potential example of culture or empathy in animals, the injunction against anthropomorphism gets trotted out to assert that such labels are inappropriate. Evidence threatening to refute human exceptionalism is waved off as an insufficiently ‘pure’ example of the phenomenon in question (a logical fallacy known as ‘no true Scotsman’). Yet nearly all these traits have run the relay from the ape down – a process de Waal calls ‘cognitive ripples’, as researchers find a particular species characteristic that breaks down the barriers to finding it somewhere else.

the origin of money is often taught as a convenient medium of exchange to relieve the problems of bartering. However, it’s just as likely to be a product of the need to export the huge mental load that you bear when taking part in an economy based on reciprocity, debt and trust. Suppose you received your dowry of 88 well-recorded sheep. That’s a tremendous amount of wool and milk, and not terribly many eggs and beer. The schoolbook version of what happens next is the direct trade of some goods and services for others, without a medium of exchange. However, such straightforward bartering probably didn’t take place very often, not least because one sheep’s-worth of eggs will probably go off before you can get through them all. Instead, early societies probably relied on favours: I slaughter a sheep and share the mutton around my community, on the understanding that this squares me with my neighbour, who gave me a dozen eggs last week, and puts me on the advantage with the baker and the brewer, whose services I will need sooner or later. Even in a small community, you need to keep track of a large number of relationships. All of this constituted a system ripe for mental automation, for money.
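
The bookkeeping burden described above can be sketched in code (a toy illustration with hypothetical names): a ledger of pairwise favours, whose number of relationships grows quadratically with community size - exactly the mental load money offloads.

```python
# A minimal sketch of the mental bookkeeping that money replaces:
# every favour creates an entry in a ledger of debts.
from collections import defaultdict

class FavourLedger:
    """Tracks who owes whom within a small community."""
    def __init__(self):
        # balance[(a, b)] > 0 means b owes a
        self.balance = defaultdict(float)

    def record(self, giver, receiver, value):
        """Giver does receiver a favour worth `value`."""
        self.balance[(giver, receiver)] += value

    def standing(self, a, b):
        """Net amount b owes a."""
        return self.balance[(a, b)] - self.balance[(b, a)]

ledger = FavourLedger()
ledger.record("shepherd", "baker", 5.0)   # mutton shared with the baker
ledger.record("baker", "shepherd", 2.0)   # bread given in return
print(ledger.standing("shepherd", "baker"))  # net: baker still owes 3.0
```

In a community of n people there are n*(n-1)/2 such relationships to track - a system ripe, as the excerpt says, for mental automation.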

Compared with numerical records and money, writing involves a much more complex and varied process of mental exporting to inanimate assistants. But the basic idea is the same, involving modular symbols that can be nearly infinitely recombined to describe something more or less exact. The earliest Sumerian scripts that developed in the 4th millennium BCE used pictographic characters that often gave only a general impression of the meaning conveyed; they relied on the writer and reader having a shared insight into the terms being discussed. NOW, THOUGH, ANYONE CAN TELL WHEN I AM YELLING AT THEM ON THE INTERNET. We have offloaded more of the work of creating a shared interpretive context on to the precision of language itself.

It’s common to hear the claim that technology is making each generation lazier than the last. Yet this slur is misguided because it ignores the profoundly human drive towards exporting effortful tasks. One can imagine that, when writing was introduced, the new-fangled scribbling was probably denigrated by traditional storytellers, who saw it as a pale imitation of oral transmission, and lacking in the good, honest work of memorisation.

To automate is human




If your only relationship with multiplication is the ability to rapidly answer questions like, “what is five times five” or “what is nine times nine,” you have turned multiplication into something that can be processed with habit mode. In the effort to accelerate and normalize the contents of mind, our society has chosen to apply “habit mode” to the multiplication table. Fair enough. And, in fact, maybe the right way to relate to the subject. It is an effective way to do basic multiplication. But it isn’t thinking.

And here is the problem: in society, it is often the case that most of the things that you need to know were figured out a long time ago. You could rediscover them for yourself, but for the most part that is an exercise in inefficiency. Certainly this is not the sort of thing that a school striving to cram as much “knowledge” as possible into its students would go about doing.

Instead, the efficient answer is to treat all knowledge as a version of the multiplication table: a sort of pre-fab script relating possible inputs (“three times three?”) and appropriate outputs (“nine!”). Who was the sixteenth President of the United States? What is the atomic weight of Hydrogen? What is the meaning of Walt Whitman’s self-contradiction? What is the appropriate relationship between individual liberty and common interests?

Perceive possible inputs, scan available outputs, faithfully report on the most appropriate response. Quickly. Reliably. Speed and precision — the sort of thing that “habit mode” was designed for.
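
The input-output character of "habit mode" can be made concrete with a toy sketch (my own illustration): a lookup table answers instantly, but only for inputs it has seen before, while actual computation generalizes to novel cases.

```python
# "Habit mode" as a pre-fab table relating possible inputs to
# appropriate outputs - fast and reliable, but not thinking.
habit_table = {("three", "three"): "nine", ("five", "five"): "twenty-five"}

def habit_mode(question):
    # Speed and precision - but only for inputs carved into the table.
    return habit_table.get(question, "no habit for this input")

def learning_mode(a, b):
    # Slower: actually derives the answer, so novel inputs still work.
    return a * b

print(habit_mode(("three", "three")))  # "nine"
print(habit_mode(("seven", "eight")))  # the table has no entry
print(learning_mode(7, 8))             # 56
```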

Do this long enough and your native capacities begin to atrophy. And in our modern environment, this is how we end up spending nearly all of our time.

Anyone who has played a computer (or console) game can almost feel the shift from learning to habit. For the first little bit, there is learning. You are exploring the shape and possibility of the game environment. But, and this is deeply crucial, no matter how complicated a game is, it is ultimately no more than merely “complicated.” Unlike nature, which is fundamentally “complex”, every game can be gamed. After only a little while, you get a feel for how it works and then begin the process of turning it into habit. Into quickly and efficiently running the right responses to the right inputs. At a formal level, computer games precisely teach you to move as quickly as possible from learning to habit and then to maximally optimize habit.

This brings us back to school. School is, by and large, formally broadcast. It is a rare student who doesn’t learn (and, sadly, this lesson is probably an example of real learning) that their job is not to think. It is to listen attentively to find out what the pre-fab set of inputs are and then to carve the correct responses into a nice habit.
Just the thing to get an A+ in a 60 minute exam. Quickly. Reliably.
Note that none of these cases have to be this way. Minecraft puts a lot more genuine learning into gaming than Candy Crush. Someone who takes television as the subject of media studies or who actually participates in filmmaking is learning. And nearly everyone has experienced blissful moments of real learning in school. It is possible to create authentic learning environments. We just, broadly speaking, haven't done so as a society.

Thinking, real thinking, is collaborative. It doesn’t just tolerate different perspectives, it absolutely requires them. We humans are only really thinking when we are doing so in community. Our individual lives and experiences are just too narrow and limited to really provide the context and capacity necessary for making sense of the unknown, for wandering through the desert, for a journey through chaos.

On Thinking and Simulated Thinking




the bombardment of pseudo-realities begins to produce inauthentic humans very quickly, spurious humans—as fake as the data pressing at them from all sides. My two topics are really one topic; they unite at this point. Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves. So we wind up with fake humans inventing fake realities and then peddling them to other fake humans.

Philip K. Dick and the Fake Humans




“I can’t honestly say at the time that I had a clear grasp of what questions might be addressed with deep learning, but I knew that we were generating data at about twice to three times the rate we could analyse it,”

Deep learning for biology




The vital importance of a future - to survival and progress. A 5 min video. Vilém Flusser - a European intellectual with a McLuhanesque vision - notes that we are all refugees from our own childhoods: we have all lost our 'dwelling', and are all of us homeless, as accelerating change transforms our lives faster than we can habituate. Living forward into a future full of uncertainty and unknowables, we all need to find vital meaning to guide our continued struggles to make a progressive future.

Viktor Frankl Explains Why If We Have True Meaning in Our Lives, We Can Make It Through the Darkest of Times

In one school of popular reasoning, people judge historical outcomes that they think are favorable as worthy tradeoffs for historical atrocities. The argument appears in some of the most inappropriate contexts, such as discussions of slavery or the Holocaust. Or in individual thought experiments, such as that of a famous inventor whose birth was the result of a brutal assault. There are a great many people who consider this thinking repulsive, morally corrosive, and astoundingly presumptuous. Not only does it assume that every terrible thing that happens is part of a benevolent design, but it pretends to know which circumstances count as unqualified goods, and which can be blithely ignored. It determines future actions from a tidy and convenient story of the past.


Do we live to work - or do we work to live? Are we on a trajectory of 'Total Work' - or do we want a society of greater equality of opportunity and exploration of what makes life worth living?

The Dutch have the best work-life balance. Here’s why

The Netherlands has overtaken Denmark as the country with the best work-life balance. That is according to the latest OECD Better Life Index, which ranks countries on how successfully households mix work, family commitments and personal life, among other factors.

For work-life balance, the Dutch scored 9.3 out of a possible 10, whereas the Danes, now ranked second, scored nine. Of the 35 OECD countries measured in the survey, Turkey’s work-life balance was the worst, rated as zero, while Mexico only scored slightly better with 0.8.

Secrets to a better work-life balance
Only 0.5% of Dutch employees regularly work very long hours, which is the lowest rate in the OECD, where the average is 13%. Instead, they devote around 16 hours per day to eating, sleeping and leisurely pursuits.


This is a strong signal of the emergence of the Smart City - and of the rise of cities as laboratories of institutional innovation. This is a project worth watching as it develops - Toronto and Google. This is a longish read.

A smarter smart city

An ambitious project by Alphabet subsidiary Sidewalk Labs could reshape how we live, work, and play in urban neighborhoods.
On Toronto’s waterfront, where the eastern part of the city meets Lake Ontario, is a patchwork of cement and dirt. It’s home to plumbing and electrical supply shops, parking lots, winter boat storage, and a hulking silo built in 1943 to store soybeans—a relic of the area’s history as a shipping port.

Torontonians describe the site as blighted, underutilized, and contaminated. Alphabet’s Sidewalk Labs wants to transform it into one of the world’s most innovative city neighborhoods. It will, in the company’s vision, be a place where driverless shuttle buses replace private cars; traffic lights track the flow of pedestrians, bicyclists, and vehicles; robots transport mail and garbage via underground tunnels; and modular buildings can be expanded to accommodate growing companies and families.


No conversation of leisure or ‘total work’ (where all aspects of life are measured with metrics of productivity) is possible without considering the future of automation.

Which of tomorrow’s jobs are you most qualified for?

The global labour market will experience rapid change over the next decade. The reason: more jobs becoming automated as technologies such as artificial intelligence and robotics take over the workplace.

Workers will have to adapt quickly, rushing to acquire a broad new set of skills that will help them survive a fast-changing job market, such as problem-solving, critical thinking and creativity, as well as developing a habit of lifelong learning.

To help prepare the future workforce, a new report by the World Economic Forum and Boston Consulting Group analysed 50 million online job postings from the United States.

Based on a person’s current job, skill-set, education and ability to learn, the researchers set out paths from jobs that exist today to new jobs expected to exist in the future. These target jobs are then assessed on how similar they are to an existing job, and on the number of job opportunities they're likely to offer in the future.
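
The report's exact similarity measure isn't described here. As a hedged sketch, one simple way to score a target job against a worker's current skill-set is set overlap (Jaccard similarity) - not necessarily the researchers' method.

```python
# Score how similar a target job is to a current job by comparing
# skill sets: shared skills divided by all skills involved.
def job_similarity(current_skills, target_skills):
    current, target = set(current_skills), set(target_skills)
    return len(current & target) / len(current | target)

current = {"data entry", "spreadsheets", "reporting"}
target = {"spreadsheets", "reporting", "data visualization", "statistics"}

# 2 shared skills out of 5 distinct skills overall
print(round(job_similarity(current, target), 2))  # 0.4
```

A score near 1 suggests a short retraining path; a score near 0 suggests the broad new skill acquisition the article warns about.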


Work - Leisure - Habit - Convenience - which rules the day? Are we suffering under the triumph of the cult of efficiency? We need new business models - designed for the non-rival, for costless coordination. To avoid the cult of 'Total Work' we need to deeply redefine what Leisure is - what a well-discovered life is and can be.
Americans say they prize competition, a proliferation of choices, the little guy. Yet our taste for convenience begets more convenience, through a combination of economies of scale and the power of habit. The easier it is to use Amazon, the more powerful Amazon becomes — and thus the easier it becomes to use Amazon. Convenience and monopoly seem to be natural bedfellows.

But we err in presuming convenience is always good, for it has a complex relationship with other ideals that we hold dear. Though understood and promoted as an instrument of liberation, convenience has a dark side. With its promise of smooth, effortless efficiency, it threatens to erase the sort of struggles and challenges that help give meaning to life. Created to free us, it can become a constraint on what we are willing to do, and thus in a subtle way it can enslave us.

If the first convenience revolution promised to make life and work easier for you, the second promised to make it easier to be you. The new technologies were catalysts of selfhood. They conferred efficiency on self-expression.

Convenience has to serve something greater than itself, lest it lead only to more convenience. In her 1963 classic, “The Feminine Mystique,” Betty Friedan looked at what household technologies had done for women and concluded that they had just created more demands. “Even with all the new labor-saving appliances,” she wrote, “the modern American housewife probably spends more time on housework than her grandmother.” When things become easier, we can seek to fill our time with more “easy” tasks. At some point, life’s defining struggle becomes the tyranny of tiny chores and petty decisions.

An unwelcome consequence of living in a world where everything is “easy” is that the only skill that matters is the ability to multitask. At the extreme, we don’t actually do anything; we only arrange what will be done, which is a flimsy basis for a life.

The Tyranny of Convenience

Convenience is the most underestimated and least understood force in the world today. As a driver of human decisions, it may not offer the illicit thrill of Freud’s unconscious sexual desires or the mathematical elegance of the economist’s incentives. Convenience is boring. But boring is not the same thing as trivial.

In the developed nations of the 21st century, convenience — that is, more efficient and easier ways of doing personal tasks — has emerged as perhaps the most powerful force shaping our individual lives and our economies. This is particularly true in America, where, despite all the paeans to freedom and individuality, one sometimes wonders whether convenience is in fact the supreme value.

As Evan Williams, a co-founder of Twitter, recently put it, “Convenience decides everything.” Convenience seems to make our decisions for us, trumping what we like to imagine are our true preferences. (I prefer to brew my coffee, but Starbucks instant is so convenient I hardly ever do what I “prefer.”) Easy is better, easiest is best.


If anyone is a fan of Black Mirror - this is the weak signal forming the technology of Season 4, Episode 3. If anyone doesn't know about the TV series Black Mirror - it is a MUST VIEW for anyone interested in dystopian glimpses of the future. There is a 2 min video illustrating the process.

Do you see what I see? Researchers harness brain waves to reconstruct images of what we perceive

A new technique developed by neuroscientists at the University of Toronto Scarborough can, for the first time, reconstruct images of what people perceive based on their brain activity gathered by EEG.
The technique developed by Dan Nemrodov, a postdoctoral fellow in Assistant Professor Adrian Nestor's lab at U of T Scarborough, is able to digitally reconstruct images seen by test subjects based on electroencephalography (EEG) data.

"When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what's happening in the brain during this process," says Nemrodov.

For the study, test subjects hooked up to EEG equipment were shown images of faces. Their brain activity was recorded and then used to digitally recreate the image in the subject's mind using a technique based on machine learning algorithms.

It's not the first time researchers have been able to reconstruct images based on visual stimuli using neuroimaging techniques. The current method was pioneered by Nestor who successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past, but this is the first time EEG has been used.

And while techniques like fMRI - which measures brain activity by detecting changes in blood flow - can grab finer details of what's going on in specific areas of the brain, EEG has greater practical potential given that it's more common, portable, and inexpensive by comparison. EEG also has greater temporal resolution, meaning it can measure with detail how a percept develops in time right down to milliseconds, explains Nemrodov.

"fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale. So we can see with very fine detail how the percept of a face develops in our brain using EEG," he says. In fact, the researchers were able to estimate that it takes our brain about 170 milliseconds (0.17 seconds) to form a good representation of a face we see.
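
A back-of-envelope sketch (the sampling rates below are typical values I've assumed, not figures from the article) shows why temporal resolution matters for watching a 170 ms percept form.

```python
# How many snapshots does each technique capture while a face
# percept forms in the brain (~170 ms)?
PERCEPT_MS = 170

eeg_hz = 1000    # assumed: EEG commonly samples ~1000 times per second
fmri_hz = 0.5    # assumed: one fMRI volume roughly every 2 seconds

eeg_samples = eeg_hz * PERCEPT_MS / 1000    # 170 snapshots of the process
fmri_samples = fmri_hz * PERCEPT_MS / 1000  # 0.085 - less than one volume

print(eeg_samples, fmri_samples)
```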


This is an important signal of the acceleration of the development and applications of AI.
when future historians of technology look back, they’re likely to see GANs as a big step toward creating machines with a human-like consciousness. Yann LeCun, Facebook’s chief AI scientist, has called GANs “the coolest idea in deep learning in the last 20 years.” Another AI luminary, Andrew Ng, the former chief scientist of China’s Baidu, says GANs represent “a significant and fundamental advance” that’s inspired a growing global community of researchers.

The GANfather: The man who’s given machines the gift of imagination

By pitting neural networks against one another, Ian Goodfellow has created a powerful AI tool. Now he, and the rest of us, must face the consequences.
Researchers were already using neural networks, algorithms loosely modeled on the web of neurons in the human brain, as “generative” models to create plausible new data of their own. But the results were often not very good: images of a computer-generated face tended to be blurry or have errors like missing ears. The plan Goodfellow’s friends were proposing was to use a complex statistical analysis of the elements that make up a photograph to help machines come up with images by themselves. This would have required a massive amount of number-crunching, and Goodfellow told them it simply wasn’t going to work.

But as he pondered the problem over his beer, he hit on an idea. What if you pitted two neural networks against each other? His friends were skeptical, so once he got home, where his girlfriend was already fast asleep, he decided to give it a try. Goodfellow coded into the early hours and then tested his software. It worked the first time.

But while deep-learning AIs can learn to recognize things, they have not been good at creating them. The goal of GANs is to give machines something akin to an imagination.
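
The adversarial setup is simple enough to sketch. Here is a toy 1-D GAN in NumPy (my own illustration, not Goodfellow's code): the generator is an affine map of noise, the discriminator a logistic classifier, and each improves by gradient ascent against the other.

```python
# Toy GAN: generator g(z) = a*z + b tries to mimic samples from
# N(4, 1); discriminator D(x) = sigmoid(w*x + c) tries to tell
# real from fake. Gradients are derived by hand for this tiny model.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-np.clip(u, -60, 60)))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    x = rng.normal(4.0, 1.0, 64)   # real batch
    z = rng.normal(0.0, 1.0, 64)   # noise
    g = a * z + b                  # fake batch

    # Discriminator ascends log D(x) + log(1 - D(g)).
    dx, dg = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * np.mean((1 - dx) * x - dg * g)
    c += lr * np.mean((1 - dx) - dg)

    # Generator ascends the non-saturating objective log D(g).
    dg = sigmoid(w * g + c)
    a += lr * np.mean((1 - dg) * w * z)
    b += lr * np.mean((1 - dg) * w)

print(round(b, 2))  # the generator's mean drifts toward the real mean of 4
```

Even this toy shows the dynamic the article describes: the discriminator's only job is to catch the generator, and the generator's only job is to fool it.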


The concept of China's emerging 'Social Credit' system (see the quote above) is not only for humans - it can be applied to all living systems - the IoS - Internet of Swine (though perhaps it shouldn't be limited to the porcine).

Artificial intelligence is being used to raise better pigs in China

Alibaba is best known as China’s largest e-commerce company, but it’s lately made forays into artificial intelligence and cloud computing. Through a program it calls ET Brain, it’s using AI to improve traffic and city planning, increase airport efficiency, and diagnose illness.

The company’s latest AI foray is taking place among pigs.
Alibaba’s Cloud Unit signed an agreement on Feb. 6 with the Tequ Group, a Chinese food-and-agriculture conglomerate that raises about 10 million pigs each year (link in Chinese), to deploy facial and voice recognition on Tequ’s pig farms.

According to an Alibaba representative, the company will offer software to Tequ that it will deploy on its farms with its own hardware. Using image recognition, the software will identify each pig based on a mark placed on its body. This corresponds with a file for each pig kept in a database, which records and tracks characteristics such as the pig’s breed type, age, and weight. The software can monitor changes in the level of a pig’s physical activity to assess its level of fitness. In addition, it can monitor the sounds on the farm—picking up a pig’s cough, for example, to assess whether or not the pig is sick and at risk of spreading a disease. The software will also draw from its data to assess which pigs are most capable of giving birth to healthy offspring.
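
The per-pig "file" described here is, at bottom, a record keyed to an ID. A minimal sketch (the field names are hypothetical - the article doesn't specify Alibaba's schema):

```python
# One record per pig, keyed to the mark read by image recognition.
from dataclasses import dataclass, field

@dataclass
class PigRecord:
    pig_id: str            # mark on the body, read by image recognition
    breed: str
    age_months: int
    weight_kg: float
    coughs_today: int = 0  # picked up by sound monitoring
    activity_log: list = field(default_factory=list)

    def flag_for_vet(self, cough_threshold=5):
        """Crude illness heuristic: frequent coughing means a check-up."""
        return self.coughs_today >= cough_threshold

pig = PigRecord("TQ-000123", "Duroc", 6, 85.0, coughs_today=7)
print(pig.flag_for_vet())  # True
```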


Here they come - Self-driving cars begin to hit some roads.

Driverless cars can operate in California as early as April

The California DMV passed regulations that allow for the public testing and deployment of autonomous cars without drivers.
Driverless cars will begin operating on California roads as early as April under regulations that were passed today by the state’s Department of Motor Vehicles.
This is the first time companies will be able to operate autonomous vehicles in California without a safety driver behind the wheel.

But those cars won’t be operating completely unmanned — at least for now. Under these regulations, driverless cars being tested on public roads must have a remote operator monitoring the car, ready to take over as needed. That remote operator — who will be overseeing the car from a location outside of the car — must also be able to communicate with law enforcement as well as the passengers in the event of an accident.

When the companies are ready to deploy the cars commercially, the remote operator will no longer be required to take over the car - only to facilitate communication while monitoring the status of the vehicle.


This contains a number of very important signals - a Must Read for anyone interested in accelerating advances in fundamental science, in accelerating efforts to apply those advances in real time, and in the evolution of a global nervous system with an enhanced capacity to find patterns in more than Big Data - ubiquitous Cosmic Data. In total speculation - the quantum Internet may also solve the problem of making a super-efficient Blockchain technology.
However - the horizon of imagining for this technology remains decades.
Proponents say that such a quantum internet could open up a whole universe of applications that are not possible with classical communications, including connecting quantum computers together; building ultra-sharp telescopes using widely separated observatories; and even establishing new ways of detecting gravitational waves. Some see it as one day displacing the Internet in its current form. “I’m personally of the opinion that in the future, most — if not all — communications will be quantum,” says physicist Anton Zeilinger at the University of Vienna, who led one of the first experiments on quantum teleportation, in 1997.

The quantum internet has arrived (and it hasn’t)

Networks that harness entanglement and teleportation could enable leaps in security, computing and science.
Before she became a theoretical physicist, Stephanie Wehner was a hacker. Like most people in that arena, she taught herself from an early age. At 15, she spent her savings on her first dial-up modem, to use at her parents’ home in Würzburg, Germany. And by 20, she had gained enough street cred to land a job in Amsterdam, at a Dutch Internet provider started by fellow hackers.

A few years later, while working as a network-security specialist, Wehner went to university. There, she learnt that quantum mechanics offers something that today’s networks are sorely lacking — the potential for unhackable communications. Now she is turning her old obsession towards a new aspiration. She wants to reinvent the Internet.

The ability of quantum particles to live in undefined states — like Schrödinger’s proverbial cat, both alive and dead — has been used for years to enhance data encryption. But Wehner, now at Delft University of Technology in the Netherlands, and other researchers argue that they could use quantum mechanics to do much more, by harnessing nature’s uncanny ability to link, or entangle, distant objects, and teleporting information between them. At first, it all sounded very theoretical, Wehner says. Now, “one has the hope of realizing it”.
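
Entanglement and teleportation sound magical, but the protocol itself can be simulated in a few lines of NumPy. This sketch (a toy state-vector illustration, not the Delft software) teleports an unknown qubit state using a shared Bell pair, two measurements, and classical corrections - and checks all four possible measurement outcomes.

```python
# Quantum teleportation on a 3-qubit state vector (ordering q0 q1 q2).
import numpy as np

# Unknown single-qubit state |psi> = alpha|0> + beta|1>
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Qubit 0 holds psi; qubits 1 and 2 share the Bell pair (|00>+|11>)/sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)  # 8 amplitudes

# CNOT (control q0, target q1) as a basis permutation, then H on q0
cnot01 = np.eye(8)[:, [0, 1, 2, 3, 6, 7, 4, 5]]
state = cnot01 @ state
state = np.kron(H, np.eye(4)) @ state

# "Measure" q0 and q1: for each outcome, project out q2's state and
# apply the classical corrections X (if m1) and Z (if m0).
for m0 in (0, 1):
    for m1 in (0, 1):
        idx = 4 * m0 + 2 * m1
        phi = state[idx:idx + 2]          # q2's unnormalised amplitudes
        phi = phi / np.linalg.norm(phi)
        if m1: phi = X @ phi
        if m0: phi = Z @ phi
        assert np.allclose(phi, psi)      # q2 now holds the original state

print("teleported for all four measurement outcomes")
```

Note what travels: only two classical bits (the measurement results). The entanglement does the rest - which is why a quantum internet needs ways to distribute entangled pairs between cities.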

A team at Delft has already started to build the first genuine quantum network, which will link four cities in the Netherlands. The project, set to be finished in 2020, could be the quantum version of ARPANET, a communications network developed by the US military in the late 1960s that paved the way for today’s Internet.


Quantum computing and entanglement are as magical and imaginary as relativity was in 1920. But we can - and must - get used to the magical qualities of emerging science.

Quantum computers go silicon

While not very powerful, the machine is ‘a big symbolic step’
For quantum computers, silicon’s springtime may finally have arrived.
Silicon-based technology is a late bloomer in the quantum computing world, lagging behind other methods. Now for the first time, scientists have performed simple algorithms on a silicon-based quantum computer, physicist Lieven Vandersypen and colleagues report online February 14 in Nature.  

The computer has just two quantum bits, or qubits, so it can perform only rudimentary computations. But the demonstration is “really the first of its kind in silicon,” says quantum physicist Jason Petta of Princeton University, who was not involved with the research.
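
As an illustration of what "rudimentary computations" on two qubits look like, here is a NumPy sketch (my own toy, not the paper's code) of two-qubit Grover search, which finds one marked item among four in a single iteration.

```python
# Two-qubit Grover search over 4 items: superpose, mark, diffuse.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)

marked = 2  # the item we are searching for (indices 0..3)

# Uniform superposition over the four basis states
state = H2 @ np.array([1, 0, 0, 0])

# Oracle: flip the sign of the marked item's amplitude
oracle = np.eye(4)
oracle[marked, marked] = -1
state = oracle @ state

# Diffusion operator: reflect amplitudes about their mean
s = np.full(4, 0.5)
state = (2 * np.outer(s, s) - np.eye(4)) @ state

probs = state ** 2
print(np.argmax(probs), round(probs[marked], 3))  # 2 1.0
```

With only four items, one oracle call suffices - the marked item is found with certainty, which is exactly the kind of small demonstration a two-qubit machine can run.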

Silicon qubits may have advantages, such as an ability to retain their quantum properties longer than other types of qubits. Plus, companies such as Intel are already adept at working with silicon, because the material is used in traditional computer chips. Researchers hope to exploit that capability, potentially allowing the computers to scale up more quickly.


Here’s a weak signal of a promising approach to a specter of our aging.

Alzheimer's disease reversed in mouse model

Researchers have found that gradually depleting an enzyme called BACE1 completely reverses the formation of amyloid plaques in the brains of mice with Alzheimer's disease, thereby improving the animals' cognitive function. The study raises hopes that drugs targeting this enzyme will be able to successfully treat Alzheimer's disease in humans.

"To our knowledge, this is the first observation of such a dramatic reversal of amyloid deposition in any study of Alzheimer's disease mouse models," says Yan, who will be moving to become chair of the department of neuroscience at the University of Connecticut this spring.


The relationships among the participants in our microbial ecology - and the participants themselves - continue to reveal their importance to human well-being.

Researchers study links between gut bacteria and brain’s memory function

Can probiotic bacteria play a role in how well your memory works? It’s too early to say for sure, but mouse studies have turned up some clues worth remembering.
Preliminary results suggest that giving mice the kinds of bacteria often found in dietary supplements has a beneficial effect on memory when it comes to navigating mazes or avoiding electrical shocks.

One such study, focusing on mazes and object-in-place recognition, was published last year. And researchers from the Pacific Northwest National Laboratory in Richland, Wash., are seeing similarly beneficial effects on memory in preliminary results from their experiments.

PNNL’s Janet Jansson provided an advance look at her team’s yet-to-be-published findings here today at the annual meeting of the American Association for the Advancement of Science.

The experiments gauged the effects of giving normal mice and germ-free mice a supplement of Lactobacillus bacteria — a type of bacteria that’s already been linked to improved cognitive function in patients with Alzheimer’s disease.


This is a fascinating article - a very accessible description of recent advances in epigenetics, deepening our understanding of the link between epigenetics and inherited behaviors. This is well worth the read.

The ramifications of a new type of gene

It can pass on acquired characteristics
WHAT’S a gene? You might think biologists had worked that one out by now. But the question is more slippery than may at first appear. The conventional answer is something like, “a piece of DNA that encodes the structure of a particular protein”. Proteins so created run the body. Genes, meanwhile, are passed on in sperm and eggs to carry the whole process to the next generation.

None of this is false. But it is now clear that reality is more complex. Many genes, it transpires, do not encode proteins. Instead, they regulate which proteins are produced. These newly discovered genes are sources of small pieces of RNA, known as micro-RNAs. RNA is a molecule allied to DNA, and is produced when DNA is read by an enzyme called RNA polymerase. If the DNA is a protein-coding gene, the resulting RNA acts as a messenger, taking the protein’s plan to a place where proteins are made. Micro-RNAs regulate this process by binding to the messenger RNA, making it inactive. More micro-RNA means less of the protein in question, and vice versa.
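The regulatory logic described above — more micro-RNA binding means less of the target protein — can be caricatured in a few lines. This is a hypothetical toy model, not anything from the article: it just assumes a simple saturating relationship between micro-RNA level and the fraction of messenger RNA left free for translation.

```python
# Toy model (illustrative assumption, not from the article): messenger RNA
# bound by micro-RNA is silenced, so protein output falls as micro-RNA rises.
def protein_output(mrna, micro_rna, binding_strength=1.0):
    """Protein produced from a given amount of messenger RNA, assuming a
    simple saturating-binding relationship with the micro-RNA level."""
    free_fraction = 1.0 / (1.0 + binding_strength * micro_rna)
    return mrna * free_fraction

for level in (0.0, 1.0, 4.0):
    print(level, protein_output(mrna=100.0, micro_rna=level))
# More micro-RNA -> less protein: 100.0, then 50.0, then 20.0
```

The inverse relationship is the whole story: a micro-RNA gene never makes a protein itself, yet it tunes how much of another gene's protein the cell sees.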


And more signals in the domestication of bacteria - the capacity to grow our colours.

In living color: Brightly-colored bacteria could be used to 'grow' paints and coatings

Researchers have unlocked the genetic code behind some of the brightest and most vibrant colours in nature. The paper, published in the journal PNAS, is the first study of the genetics of structural colour - as seen in butterfly wings and peacock feathers - and paves the way for genetic research in a variety of structurally coloured organisms.

The study is a collaboration between the University of Cambridge and Dutch company Hoekmine BV and shows how genetics can change the colour, and appearance, of certain types of brightly-coloured bacteria. The results open up the possibility of harvesting these bacteria for the large-scale manufacturing of nanostructured materials: biodegradable, non-toxic paints could be 'grown' and not made, for example.

Flavobacterium is a type of bacteria that packs together in colonies that produce striking metallic colours, which come not from pigments, but from their internal structure, which reflects light at certain wavelengths. Scientists are still puzzled as to how these intricate structures are genetically engineered by nature, however.


This is a very interesting and promising concept-project - one that involves our domestication of biology and the recycling of materials in virtuous cycles.

The Building Materials Of The Future Are . . . Old Buildings

Every year, more than 530 million tons of construction and demolition waste like timber, concrete, and asphalt end up in landfills in the U.S.–about double the amount of waste picked up by garbage trucks every year from homes, businesses, and institutions. But what if all of the material used in buildings and other structures could be recycled into a new type of construction material?

That’s what the Cleveland-based architecture firm Redhouse Studio is trying to do. The firm, led by architect Christopher Maurer, has developed a biological process to turn wood scraps and other kinds of construction waste like sheathing, flooring, and organic insulation into a new, brick-like building material.

Maurer wants to use the waste materials from the thousands of homes in Cleveland that have been demolished over the last decade or so as a source to create this new biomaterial. Now, the firm has launched a Kickstarter to transform an old shipping container into a mobile lab called the Biocycler, which Maurer and his team can drive to these demolished homes and begin the process of turning their waste into materials to build new walls.

If the project is funded, Maurer hopes to use the lab to build an agricultural building for the nonprofit Refugee Response, which puts refugees in the Cleveland area to work on an urban farm.
The biological process uses the binding properties of mycelium, the thread-like network of the fungi that produce mushrooms. Once the waste is combined with the mycelium, it is put into brick-shaped forms, where it stews for days or weeks, depending on how much mycelium is added. The bound biomaterial has the consistency of rigid insulation; the team then compacts it to make it sturdy enough to be used as a structural material.


This is an important signal for the emerging change in energy geopolitics.

Big Batteries Are Becoming Much Cheaper

Huge battery arrays are undermining peakers—the gas-fired power plants deployed during peak demand—and could in the future completely change the face of the power market.
Batteries are hot right now. One industry executive called energy storage the Holy Grail of renewables, since it would solve their main problem: intermittency. No wonder, then, that everyone is working hard on storage.

One Minnesota utility, Xcel Energy, recently carried out a tender for the construction of a solar-plus-storage installation, receiving 87 bids with an average price of just US$36 per megawatt-hour. This compares with US$87 for electricity generated by peakers, a price that includes the cost of construction and fuel purchases for the plant.
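The back-of-envelope arithmetic behind that comparison is worth spelling out. The two dollar figures come from the article; the calculation itself is just an illustration.

```python
# Rough comparison of the per-megawatt-hour figures quoted above (US$/MWh).
solar_plus_storage = 36.0   # average of the 87 bids in Xcel Energy's tender
peaker = 87.0               # gas peaker, construction and fuel included

saving_per_mwh = peaker - solar_plus_storage
saving_pct = 100.0 * saving_per_mwh / peaker
print(f"${saving_per_mwh:.0f}/MWh saved, about {saving_pct:.0f}% cheaper")
# -> $51/MWh saved, about 59% cheaper
```

On these numbers, solar-plus-storage bids undercut peaker electricity by more than half — which is why new peaker construction starts to look uneconomical.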

But peakers are not regular power plants. They only work for a few hours a day when demand is at its highest, and this makes them less cost-efficient than regular power plants. Yet the fact that big batteries are beginning to make the construction of new peakers uneconomical could be a sign of what is to come: more and cheaper installations that use renewable energy to power tens of thousands of households.