Thursday, March 1, 2018

Friday Thinking 2 March 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9




Known by the anodyne name “social credit,” this system is designed to reach into every corner of existence both online and off. It monitors each individual’s consumer behavior, conduct on social networks, and real-world infractions like speeding tickets or quarrels with neighbors. Then it integrates them into a single, algorithmically determined “sincerity” score. Every Chinese citizen receives a literal, numeric index of their trustworthiness and virtue, and this index unlocks, well, everything. In principle, anyway, this one number will determine the opportunities citizens are offered, the freedoms they enjoy, and the privileges they are granted.

This end-to-end grid of social control is still in its prototype stages, but three things are already becoming clear: First, where it has actually been deployed, it has teeth. Second, it has profound implications for the texture of urban life. And finally, there’s nothing so distinctly Chinese about it that it couldn’t be rolled out anywhere else the right conditions obtain. The advent of social credit portends changes both dramatic and consequential for life in cities everywhere—including the one you might call home.

In June 2014, the governing State Council of the People’s Republic of China issued a remarkable document. Dryly entitled “Planning Outline for the Construction of a Social Credit System,” it laid out the methods by which the state intended to weave together public and private records of personal behavior, and from them derive a single numeric index of good conduct. Each and every Chinese citizen would henceforth bear this index, in perpetuity.

As laid out in the proposal, the system’s stated purpose was to ride herd on corrupt government officials, penalize the manufacture of counterfeit consumer products, and prevent mislabeled or adulterated food and pharmaceuticals from reaching market. This framing connects the system to a long Confucian tradition of attempts to bolster public rectitude.

China's Dystopian Tech Could Be Contagious

Some of these proposed watersheds, such as tool-use, are old suggestions, stretching back to how the Victorians grappled with the consequences of Darwinism. Others, such as imitation or empathy, are still denied to non-humans by certain modern psychologists. In Are We Smart Enough to Know How Smart Animals Are? (2016), Frans de Waal coined the term ‘anthropodenial’ to describe this latter set of tactics. Faced with a potential example of culture or empathy in animals, the injunction against anthropomorphism gets trotted out to assert that such labels are inappropriate. Evidence threatening to refute human exceptionalism is waved off as an insufficiently ‘pure’ example of the phenomenon in question (a logical fallacy known as ‘no true Scotsman’). Yet nearly all these traits have run the relay from the ape down – a process de Waal calls ‘cognitive ripples’, as researchers find a particular species characteristic that breaks down the barriers to finding it somewhere else.

the origin of money is often taught as a convenient medium of exchange to relieve the problems of bartering. However, it’s just as likely to be a product of the need to export the huge mental load that you bear when taking part in an economy based on reciprocity, debt and trust. Suppose you received your dowry of 88 well-recorded sheep. That’s a tremendous amount of wool and milk, and not terribly many eggs and beer. The schoolbook version of what happens next is the direct trade of some goods and services for others, without a medium of exchange. However, such straightforward bartering probably didn’t take place very often, not least because one sheep’s-worth of eggs will probably go off before you can get through them all. Instead, early societies probably relied on favours: I slaughter a sheep and share the mutton around my community, on the understanding that this squares me with my neighbour, who gave me a dozen eggs last week, and puts me at an advantage with the baker and the brewer, whose services I will need sooner or later. Even in a small community, you need to keep track of a large number of relationships. All of this constituted a system ripe for mental automation, for money.

Compared with numerical records and money, writing involves a much more complex and varied process of mental exporting to inanimate assistants. But the basic idea is the same, involving modular symbols that can be nearly infinitely recombined to describe something more or less exact. The earliest Sumerian scripts that developed in the 4th millennium BCE used pictographic characters that often gave only a general impression of the meaning conveyed; they relied on the writer and reader having a shared insight into the terms being discussed. NOW, THOUGH, ANYONE CAN TELL WHEN I AM YELLING AT THEM ON THE INTERNET. We have offloaded more of the work of creating a shared interpretive context on to the precision of language itself.

It’s common to hear the claim that technology is making each generation lazier than the last. Yet this slur is misguided because it ignores the profoundly human drive towards exporting effortful tasks. One can imagine that, when writing was introduced, the new-fangled scribbling was probably denigrated by traditional storytellers, who saw it as a pale imitation of oral transmission, and lacking in the good, honest work of memorisation.

To automate is human

If your only relationship with multiplication is the ability to rapidly answer questions like, “what is five times five” or “what is nine times nine,” you have turned multiplication into something that can be processed with habit mode. In the effort to accelerate and normalize the contents of mind, our society has chosen to apply “habit mode” to the multiplication table. Fair enough. And, in fact, maybe the right way to relate to the subject. It is an effective way to do basic multiplication. But it isn’t thinking.

And here is the problem: in society, it is often the case that most of the things that you need to know were figured out a long time ago. You could rediscover them for yourself, but for the most part that is an exercise in inefficiency. Certainly this is not the sort of thing that a school striving to cram as much “knowledge” as possible into its students would go about doing.

Instead, the efficient answer is to treat all knowledge as a version of the multiplication table: a sort of pre-fab script relating possible inputs (“three times three?”) and appropriate outputs (“nine!”). Who was the sixteenth President of the United States? What is the atomic weight of Hydrogen? What is the meaning of Walt Whitman’s self-contradiction? What is the appropriate relationship between individual liberty and common interests?

Perceive possible inputs, scan available outputs, faithfully report on the most appropriate response. Quickly. Reliably. Speed and precision — the sort of thing that “habit mode” was designed for.
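The difference can be sketched in a few lines of Python — a hypothetical illustration of my own, not anything from the essay: “habit mode” as a pre-fab lookup table over the drilled inputs, versus actually computing the answer from first principles.

```python
# "Habit mode": a pre-computed table of rehearsed input -> output pairs,
# here the standard single-digit multiplication table.
HABIT = {(a, b): a * b for a in range(1, 10) for b in range(1, 10)}

def habit_mode(a, b):
    # Fast and reliable, but only for inputs drilled in advance.
    return HABIT.get((a, b))  # returns None for anything outside the script

def thinking_mode(a, b):
    # A slower "first-principles" route: multiplication as repeated addition.
    return sum(a for _ in range(b))

print(habit_mode(5, 5))       # 25
print(habit_mode(12, 12))     # None - the habit has no script for this input
print(thinking_mode(12, 12))  # 144
```

The habit table answers instantly but only within its rehearsed range; anything novel forces a fall back to the slower computed route — which is the essay’s point about what habit mode is and isn’t.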

Do this long enough and your native capacities begin to atrophy. And in our modern environment, this is how we end up spending nearly all of our time.

Anyone who has played a computer (or console) game can almost feel the shift from learning to habit. For the first little bit, there is learning. You are exploring the shape and possibility of the game environment. But, and this is deeply crucial, no matter how complicated a game is, it is ultimately no more than merely “complicated.” Unlike nature, which is fundamentally “complex”, every game can be gamed. After only a little while, you get a feel for how it works and then begin the process of turning it into habit. Into quickly and efficiently running the right responses to the right inputs. At a formal level, computer games precisely teach you to move as quickly as possible from learning to habit and then to maximally optimize habit.

This brings us back to school. School is, by and large, formally broadcast. It is a rare student who doesn’t learn (and, sadly, this lesson is probably an example of real learning) that their job is not to think. It is to listen attentively to find out what the pre-fab set of inputs are and then to carve the correct responses into a nice habit.
Just the thing to get an A+ in a 60-minute exam. Quickly. Reliably.
Note that none of these cases have to be this way. Minecraft puts a lot more genuine learning into gaming than Candy Crush. Someone who takes television as the subject of media studies or who actually participates in filmmaking is learning. And nearly everyone has experienced blissful moments of real learning in school. It is possible to create authentic learning environments. We just, broadly speaking, haven’t done so as a society.

Thinking, real thinking, is collaborative. It doesn’t just tolerate different perspectives, it absolutely requires them. We humans are only really thinking when we are doing so in community. Our individual lives and experiences are just too narrow and limited to really provide the context and capacity necessary for making sense of the unknown, for wandering through the desert, for a journey through chaos.

On Thinking and Simulated Thinking

the bombardment of pseudo-realities begins to produce inauthentic humans very quickly, spurious humans—as fake as the data pressing at them from all sides. My two topics are really one topic; they unite at this point. Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves. So we wind up with fake humans inventing fake realities and then peddling them to other fake humans.

Philip K. Dick and the Fake Humans

“I can’t honestly say at the time that I had a clear grasp of what questions might be addressed with deep learning, but I knew that we were generating data at about twice to three times the rate we could analyse it,”

Deep learning for biology

The vital importance of a future - to survival and progress. A 5 min video. As we are all refugees from our own childhoods - Vilém Flusser - a European intellectual with a McLuhanesque vision - notes that we have all lost our ‘dwelling’; we are all of us homeless as accelerating change transforms our lives faster than we can habituate. Living forward into a future full of uncertainty and unknowables - we all need to find vital meaning to guide our continued struggles to make a progressive future.

Viktor Frankl Explains Why If We Have True Meaning in Our Lives, We Can Make It Through the Darkest of Times

In one school of popular reasoning, people judge historical outcomes that they think are favorable as worthy tradeoffs for historical atrocities. The argument appears in some of the most inappropriate contexts, such as discussions of slavery or the Holocaust. Or in individual thought experiments, such as that of a famous inventor whose birth was the result of a brutal assault. There are a great many people who consider this thinking repulsive, morally corrosive, and astoundingly presumptuous. Not only does it assume that every terrible thing that happens is part of a benevolent design, but it pretends to know which circumstances count as unqualified goods, and which can be blithely ignored. It determines future actions from a tidy and convenient story of the past.

Do we live to work - or do we work to live? Are we on a trajectory of ‘Total Work’, or do we want a society of greater equality of opportunity and exploration of why life is worth living?

The Dutch have the best work-life balance. Here’s why

The Netherlands has overtaken Denmark as the country with the best work-life balance. That is according to the latest OECD Better Life Index, which ranks countries on how successfully households mix work, family commitments and personal life, among other factors.

For work-life balance, the Dutch scored 9.3 out of a possible 10, whereas the Danes, now ranked second, scored nine. Of the 35 OECD countries measured in the survey, Turkey’s work-life balance was the worst, rated as zero, while Mexico only scored slightly better with 0.8.

Secrets to a better work-life balance
Only 0.5% of Dutch employees regularly work very long hours, which is the lowest rate in the OECD, where the average is 13%. Instead, they devote around 16 hours per day to eating, sleeping and leisurely pursuits.

This is a strong signal of the emergence of the Smart City - and the rise of cities as laboratories of institutional innovation. This is a project worth watching as it develops - Toronto and Google. This is a longish read.

A smarter smart city

An ambitious project by Alphabet subsidiary Sidewalk Labs could reshape how we live, work, and play in urban neighborhoods.
On Toronto’s waterfront, where the eastern part of the city meets Lake Ontario, is a patchwork of cement and dirt. It’s home to plumbing and electrical supply shops, parking lots, winter boat storage, and a hulking silo built in 1943 to store soybeans—a relic of the area’s history as a shipping port.

Torontonians describe the site as blighted, underutilized, and contaminated. Alphabet’s Sidewalk Labs wants to transform it into one of the world’s most innovative city neighborhoods. It will, in the company’s vision, be a place where driverless shuttle buses replace private cars; traffic lights track the flow of pedestrians, bicyclists, and vehicles; robots transport mail and garbage via underground tunnels; and modular buildings can be expanded to accommodate growing companies and families.

No conversation of leisure or ‘total work’ (where all aspects of life are measured with metrics of productivity) is possible without considering the future of automation.

Which of tomorrow’s jobs are you most qualified for?

The global labour market will experience rapid change over the next decade. The reason: more jobs becoming automated as technologies such as artificial intelligence and robotics take over the workplace.

Workers will have to adapt quickly, rushing to acquire a broad new set of skills that will help them survive a fast-changing job market, such as problem-solving, critical thinking and creativity, as well as developing a habit of lifelong learning.

To help prepare the future workforce, a new report by the World Economic Forum and Boston Consulting Group analysed 50 million online job postings from the United States.

Based on a person’s current job, skill-set, education and ability to learn, the researchers set out paths from jobs that exist today to new jobs expected to exist in the future. These target jobs are then assessed on how similar they are to an existing job, and on the number of job opportunities they're likely to offer in the future.
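The kind of matching the report describes can be sketched as follows — a hypothetical illustration with invented job names and skill sets, using a simple Jaccard overlap between skill sets rather than the researchers’ actual method:

```python
# Score each target job by the overlap between its required skills and a
# worker's current skills, using Jaccard similarity: |A & B| / |A | B|.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Invented example data - not from the WEF/BCG report.
current = {"statistics", "sql", "reporting"}
targets = {
    "data analyst": {"statistics", "sql", "visualisation"},
    "ml engineer":  {"statistics", "python", "ml", "sql"},
    "ux designer":  {"prototyping", "research", "visualisation"},
}

# Rank target jobs by similarity to the worker's existing skill set.
ranked = sorted(targets, key=lambda j: jaccard(current, targets[j]), reverse=True)
print(ranked[0])  # "data analyst" - the closest match for this skill set
```

A real transition model would also weigh the second factor the researchers mention — projected openings for each target job — not just skill similarity.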

Work - Leisure - Habit - Convenience - which rules the day? Are we suffering in the triumph of the cult of efficiency? We need new business models - models designed for the non-rival, for costless coordination. In order to avoid the ‘cult of Total Work’ we need to deeply redefine what Leisure is - what a well-discovered life is and can be.
Americans say they prize competition, a proliferation of choices, the little guy. Yet our taste for convenience begets more convenience, through a combination of the economics of scale and the power of habit. The easier it is to use Amazon, the more powerful Amazon becomes — and thus the easier it becomes to use Amazon. Convenience and monopoly seem to be natural bedfellows.

But we err in presuming convenience is always good, for it has a complex relationship with other ideals that we hold dear. Though understood and promoted as an instrument of liberation, convenience has a dark side. With its promise of smooth, effortless efficiency, it threatens to erase the sort of struggles and challenges that help give meaning to life. Created to free us, it can become a constraint on what we are willing to do, and thus in a subtle way it can enslave us.

If the first convenience revolution promised to make life and work easier for you, the second promised to make it easier to be you. The new technologies were catalysts of selfhood. They conferred efficiency on self-expression.

Convenience has to serve something greater than itself, lest it lead only to more convenience. In her 1963 classic, “The Feminine Mystique,” Betty Friedan looked at what household technologies had done for women and concluded that they had just created more demands. “Even with all the new labor-saving appliances,” she wrote, “the modern American housewife probably spends more time on housework than her grandmother.” When things become easier, we can seek to fill our time with more “easy” tasks. At some point, life’s defining struggle becomes the tyranny of tiny chores and petty decisions.

An unwelcome consequence of living in a world where everything is “easy” is that the only skill that matters is the ability to multitask. At the extreme, we don’t actually do anything; we only arrange what will be done, which is a flimsy basis for a life.

The Tyranny of Convenience

Convenience is the most underestimated and least understood force in the world today. As a driver of human decisions, it may not offer the illicit thrill of Freud’s unconscious sexual desires or the mathematical elegance of the economist’s incentives. Convenience is boring. But boring is not the same thing as trivial.

In the developed nations of the 21st century, convenience — that is, more efficient and easier ways of doing personal tasks — has emerged as perhaps the most powerful force shaping our individual lives and our economies. This is particularly true in America, where, despite all the paeans to freedom and individuality, one sometimes wonders whether convenience is in fact the supreme value.

As Evan Williams, a co-founder of Twitter, recently put it, “Convenience decides everything.” Convenience seems to make our decisions for us, trumping what we like to imagine are our true preferences. (I prefer to brew my coffee, but Starbucks instant is so convenient I hardly ever do what I “prefer.”) Easy is better, easiest is best.

If anyone is a fan of Black Mirror - this is the weak signal forming the technology of Season 4, Episode 3. If anyone doesn’t know about the TV series Black Mirror - it is a MUST VIEW for anyone interested in dystopian glimpses of the future. There is a 2 min video illustrating the process.

Do you see what I see? Researchers harness brain waves to reconstruct images of what we perceive

A new technique developed by neuroscientists at the University of Toronto Scarborough can, for the first time, reconstruct images of what people perceive based on their brain activity gathered by EEG.
The technique developed by Dan Nemrodov, a postdoctoral fellow in Assistant Professor Adrian Nestor's lab at U of T Scarborough, is able to digitally reconstruct images seen by test subjects based on electroencephalography (EEG) data.

"When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what's happening in the brain during this process," says Nemrodov.

For the study, test subjects hooked up to EEG equipment were shown images of faces. Their brain activity was recorded and then used to digitally recreate the image in the subject's mind using a technique based on machine learning algorithms.

It's not the first time researchers have been able to reconstruct images based on visual stimuli using neuroimaging techniques. The current method was pioneered by Nestor who successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past, but this is the first time EEG has been used.

And while techniques like fMRI - which measures brain activity by detecting changes in blood flow - can grab finer details of what's going on in specific areas of the brain, EEG has greater practical potential given that it's more common, portable, and inexpensive by comparison. EEG also has greater temporal resolution, meaning it can measure with detail how a percept develops in time right down to milliseconds, explains Nemrodov.

"fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale. So we can see with very fine detail how the percept of a face develops in our brain using EEG," he says. In fact, the researchers were able to estimate that it takes our brain about 170 milliseconds (0.17 seconds) to form a good representation of a face we see.

This is an important signal of the acceleration of the development and applications of AI.
when future historians of technology look back, they’re likely to see GANs as a big step toward creating machines with a human-like consciousness. Yann LeCun, Facebook’s chief AI scientist, has called GANs “the coolest idea in deep learning in the last 20 years.” Another AI luminary, Andrew Ng, the former chief scientist of China’s Baidu, says GANs represent “a significant and fundamental advance” that’s inspired a growing global community of researchers.

The GANfather: The man who’s given machines the gift of imagination

By pitting neural networks against one another, Ian Goodfellow has created a powerful AI tool. Now he, and the rest of us, must face the consequences.
Researchers were already using neural networks, algorithms loosely modeled on the web of neurons in the human brain, as “generative” models to create plausible new data of their own. But the results were often not very good: images of a computer-generated face tended to be blurry or have errors like missing ears. The plan Goodfellow’s friends were proposing was to use a complex statistical analysis of the elements that make up a photograph to help machines come up with images by themselves. This would have required a massive amount of number-crunching, and Goodfellow told them it simply wasn’t going to work.

But as he pondered the problem over his beer, he hit on an idea. What if you pitted two neural networks against each other? His friends were skeptical, so once he got home, where his girlfriend was already fast asleep, he decided to give it a try. Goodfellow coded into the early hours and then tested his software. It worked the first time.

But while deep-learning AIs can learn to recognize things, they have not been good at creating them. The goal of GANs is to give machines something akin to an imagination.
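To make “pitting two neural networks against each other” concrete, here is a deliberately toy sketch — my own illustration, vastly simpler than any real GAN: a one-parameter “generator” tries to mimic data drawn from a Gaussian, while a logistic-regression “discriminator” tries to tell real samples from generated ones; each side is updated by gradient ascent on its own objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

REAL_MEAN = 4.0   # the "real" data: samples from a Gaussian centred here

# Generator: a single parameter mu (the mean of what it produces).
# Discriminator: a tiny classifier D(x) = sigmoid(w*x + b).
mu = 0.0
w, b = 0.1, 0.0
lr = 0.05

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, 32)
    fake = mu + rng.normal(0.0, 1.0, 32)

    # Discriminator: ascend log D(real) + log(1 - D(fake)),
    # i.e. learn to tell real samples from generated ones.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator: ascend log D(fake),
    # i.e. shift mu so the discriminator mistakes fakes for real.
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)

print(round(mu, 1))  # the generator's mean is pulled toward REAL_MEAN
```

The generator never sees the real data directly; it only gets feedback from the discriminator — yet its output drifts toward the real distribution. That is the adversarial dynamic Goodfellow’s GANs run at the scale of images rather than a single number.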

The concept of China’s emerging ‘Social Credit’ system (see the quote above) is not only for humans - it can be applied to all living systems - the IoS, Internet of Swine (though perhaps this shouldn’t be limited to the porcine).

Artificial intelligence is being used to raise better pigs in China

Alibaba is best known as China’s largest e-commerce company, but it’s lately made forays into artificial intelligence and cloud computing. Through a program it calls ET Brain, it’s using AI to improve traffic and city planning, increase airport efficiency, and diagnose illness.

The company’s latest AI foray is taking place among pigs.
Alibaba’s Cloud Unit signed an agreement on Feb. 6 with the Tequ Group, a Chinese food-and-agriculture conglomerate that raises about 10 million pigs each year (link in Chinese), to deploy facial and voice recognition on Tequ’s pig farms.

According to an Alibaba representative, the company will offer software to Tequ that it will deploy on its farms with its own hardware. Using image recognition, the software will identify each pig based on a mark placed on its body. This corresponds with a file for each pig kept in a database, which records and tracks characteristics such as the pig’s breed type, age, and weight. The software can monitor changes in the level of a pig’s physical activity to assess its level of fitness. In addition, it can monitor the sounds on the farm—picking up a pig’s cough, for example, to assess whether or not the pig is sick and at risk of spreading a disease. The software will also draw from its data to assess which pigs are most capable of giving birth to healthy offspring.

Here they come - Self-driving cars begin to hit some roads.

Driverless cars can operate in California as early as April

The California DMV passed regulations that allow for the public testing and deployment of autonomous cars without drivers.
Driverless cars will begin operating on California roads as early as April under regulations that were passed today by the state’s Department of Motor Vehicles.
This is the first time companies will be able to operate autonomous vehicles in California without a safety driver behind the wheel.

But those cars won’t be operating completely unmanned — at least for now. Under these regulations, driverless cars being tested on public roads must have a remote operator monitoring the car, ready to take over as needed. That remote operator — who will be overseeing the car from a location outside of the car — must also be able to communicate with law enforcement as well as the passengers in the event of an accident.

When the companies are ready to deploy the cars commercially, the remote operator is no longer required to take over the car, just facilitate communication while it monitors the status of the vehicle.

This contains a number of very important signals - a Must Read for anyone interested in accelerating advances in fundamental science, accelerating efforts to apply those advances in real time, and the evolution of a global nervous system with enhanced capacity to find patterns in more than Big Data - rather, ubiquitous Cosmic Data. In total speculation - the quantum Internet may also solve the problem of making a super-efficient Blockchain technology.
However, the horizon of imagining for this technology remains decades away.
Proponents say that such a quantum internet could open up a whole universe of applications that are not possible with classical communications, including connecting quantum computers together; building ultra-sharp telescopes using widely separated observatories; and even establishing new ways of detecting gravitational waves. Some see it as one day displacing the Internet in its current form. “I’m personally of the opinion that in the future, most — if not all — communications will be quantum,” says physicist Anton Zeilinger at the University of Vienna, who led one of the first experiments on quantum teleportation, in 1997.

The quantum internet has arrived (and it hasn’t)

Networks that harness entanglement and teleportation could enable leaps in security, computing and science.
Before she became a theoretical physicist, Stephanie Wehner was a hacker. Like most people in that arena, she taught herself from an early age. At 15, she spent her savings on her first dial-up modem, to use at her parents’ home in Würzburg, Germany. And by 20, she had gained enough street cred to land a job in Amsterdam, at a Dutch Internet provider started by fellow hackers.

A few years later, while working as a network-security specialist, Wehner went to university. There, she learnt that quantum mechanics offers something that today’s networks are sorely lacking — the potential for unhackable communications. Now she is turning her old obsession towards a new aspiration. She wants to reinvent the Internet.

The ability of quantum particles to live in undefined states — like Schrödinger’s proverbial cat, both alive and dead — has been used for years to enhance data encryption. But Wehner, now at Delft University of Technology in the Netherlands, and other researchers argue that they could use quantum mechanics to do much more, by harnessing nature’s uncanny ability to link, or entangle, distant objects, and teleporting information between them. At first, it all sounded very theoretical, Wehner says. Now, “one has the hope of realizing it”.

A team at Delft has already started to build the first genuine quantum network, which will link four cities in the Netherlands. The project, set to be finished in 2020, could be the quantum version of ARPANET, a communications network developed by the US military in the late 1960s that paved the way for today’s Internet.

Quantum computing and entanglement are as magical and imaginary as relativity was in 1920. But we can - and must - get used to the magical qualities of emerging science.

Quantum computers go silicon

While not very powerful, the machine is ‘a big symbolic step’
For quantum computers, silicon’s springtime may finally have arrived.
Silicon-based technology is a late bloomer in the quantum computing world, lagging behind other methods. Now for the first time, scientists have performed simple algorithms on a silicon-based quantum computer, physicist Lieven Vandersypen and colleagues report online February 14 in Nature.  

The computer has just two quantum bits, or qubits, so it can perform only rudimentary computations. But the demonstration is “really the first of its kind in silicon,” says quantum physicist Jason Petta of Princeton University, who was not involved with the research.

Silicon qubits may have advantages, such as an ability to retain their quantum properties longer than other types of qubits. Plus, companies such as Intel are already adept at working with silicon, because the material is used in traditional computer chips. Researchers hope to exploit that capability, potentially allowing the computers to scale up more quickly.

Here’s a weak signal of a promising approach to a specter of our aging.

Alzheimer's disease reversed in mouse model

Researchers have found that gradually depleting an enzyme called BACE1 completely reverses the formation of amyloid plaques in the brains of mice with Alzheimer's disease, thereby improving the animals' cognitive function. The study raises hopes that drugs targeting this enzyme will be able to successfully treat Alzheimer's disease in humans.

"To our knowledge, this is the first observation of such a dramatic reversal of amyloid deposition in any study of Alzheimer's disease mouse models," says Yan, who will be moving to become chair of the department of neuroscience at the University of Connecticut this spring.

The relationships between the participants in our microbial ecology - and the participants themselves - continue to reveal their importance to human well-being.

Researchers study links between gut bacteria and brain’s memory function

Can probiotic bacteria play a role in how well your memory works? It’s too early to say for sure, but mouse studies have turned up some clues worth remembering.
Preliminary results suggest that giving mice the kinds of bacteria often found in dietary supplements has a beneficial effect on memory when it comes to navigating mazes or avoiding electrical shocks.

One such study, focusing on mazes and object-in-place recognition, was published last year. And researchers from the Pacific Northwest National Laboratory in Richland, Wash., are seeing similarly beneficial effects on memory in preliminary results from their experiments.

PNNL’s Janet Jansson provided an advance look at her team’s yet-to-be-published findings here today at the annual meeting of the American Association for the Advancement of Science.

The experiments gauged the effects of giving normal mice and germ-free mice a supplement of Lactobacillus bacteria — a type of bacteria that’s already been linked to improved cognitive function in patients with Alzheimer’s disease.

This is a fascinating article - a very accessible description of recent advances in epigenetics, deepening our understanding of the link between epigenetics and inherited behaviors. This is well worth the read.

The ramifications of a new type of gene

It can pass on acquired characteristics
WHAT’S a gene? You might think biologists had worked that one out by now. But the question is more slippery than may at first appear. The conventional answer is something like, “a piece of DNA that encodes the structure of a particular protein”. Proteins so created run the body. Genes, meanwhile, are passed on in sperm and eggs to carry the whole process to the next generation.

None of this is false. But it is now clear that reality is more complex. Many genes, it transpires, do not encode proteins. Instead, they regulate which proteins are produced. These newly discovered genes are sources of small pieces of RNA, known as micro-RNAs. RNA is a molecule allied to DNA, and is produced when DNA is read by an enzyme called RNA polymerase. If the DNA is a protein-coding gene, the resulting RNA acts as a messenger, taking the protein’s plan to a place where proteins are made. Micro-RNAs regulate this process by binding to the messenger RNA, making it inactive. More micro-RNA means less of the protein in question, and vice versa.
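The regulatory logic described above - micro-RNAs bind messenger RNA and inactivate it, so more micro-RNA means less protein - can be captured in a deliberately simple toy model. The numbers here are arbitrary illustrations, not biological data, and the one-to-one silencing assumption is a simplification for clarity:

```python
# Toy model of the micro-RNA regulation described above.
# Assumption (simplified): each micro-RNA copy silences exactly one
# messenger RNA (mRNA) copy, and each remaining active mRNA yields
# one protein. Real stoichiometry is far more complex.

def protein_output(mrna_copies: int, microrna_copies: int) -> int:
    """Proteins produced after micro-RNA silencing of messengers."""
    active_mrna = max(mrna_copies - microrna_copies, 0)
    return active_mrna  # one protein per active messenger, for simplicity

print(protein_output(100, 0))    # no regulation -> full output: 100
print(protein_output(100, 60))   # more micro-RNA -> less protein: 40
print(protein_output(100, 150))  # silencing saturates at zero: 0
```

Even this crude sketch shows the inverse relationship the article describes: raising micro-RNA levels dials protein production down, and lowering them dials it back up.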

And more signals in the domestication of bacteria - the capacity to grow our colours.

In living color: Brightly-colored bacteria could be used to 'grow' paints and coatings

Researchers have unlocked the genetic code behind some of the brightest and most vibrant colours in nature. The paper, published in the journal PNAS, is the first study of the genetics of structural colour - as seen in butterfly wings and peacock feathers - and paves the way for genetic research in a variety of structurally coloured organisms.

The study is a collaboration between the University of Cambridge and Dutch company Hoekmine BV and shows how genetics can change the colour, and appearance, of certain types of brightly-coloured bacteria. The results open up the possibility of harvesting these bacteria for the large-scale manufacturing of nanostructured materials: biodegradable, non-toxic paints could be 'grown' and not made, for example.

Flavobacterium is a type of bacteria that packs together in colonies producing striking metallic colours, which come not from pigments but from internal structures that reflect light at certain wavelengths. However, scientists are still puzzled as to how nature genetically engineers these intricate structures.

This is a very interesting and promising concept-project - one that involves our domestication of biology and recycling materials - in virtuous cycles.

The Building Materials Of The Future Are . . . Old Buildings

Every year, more than 530 million tons of construction and demolition waste like timber, concrete, and asphalt end up in landfills in the U.S.–about double the amount of waste picked up by garbage trucks every year from homes, businesses, and institutions. But what if all of the material used in buildings and other structures could be recycled into a new type of construction material?

That’s what the Cleveland-based architecture firm Redhouse Studio is trying to do. The firm, led by architect Christopher Maurer, has developed a biological process to turn wood scraps and other kinds of construction waste like sheathing, flooring, and organic insulation into a new, brick-like building material.

Maurer wants to use the waste materials from the thousands of homes in Cleveland that have been demolished over the last decade or so as a source to create this new biomaterial. Now, the firm has launched a Kickstarter to transform an old shipping container into a mobile lab called the Biocycler, which Maurer and his team can drive to these demolished homes and begin the process of turning their waste into materials to build new walls.

If the project is funded, Maurer hopes to use the lab to build an agricultural building for the nonprofit Refugee Response, which puts refugees in the Cleveland area to work on an urban farm.

The biological process uses the binding properties of mycelium, the root-like network of the fungi that produce mushrooms. Once the waste is combined with the mycelium, it is put into brick-shaped forms, where it stews for days or weeks, depending on how much mycelium is added. The bound biomaterial has the consistency of rigid insulation; the team then compacts the bricks to make them sturdy enough to be used as a structural material.

This is an important signal for the emerging change in energy geopolitics.

Big Batteries Are Becoming Much Cheaper

Huge battery arrays are undermining peakers—the gas-fired power plants deployed during peak demand—and could in the future completely change the face of the power market.

Batteries are hot right now. One industry executive called energy storage the Holy Grail of renewables, since it would solve their main problem: intermittency. No wonder, then, that everyone is working hard on storage.

Not long ago, the Minnesota utility Xcel Energy carried out a tender for the construction of a solar-plus-storage installation, receiving 87 bids whose average price was just US$36 per megawatt-hour. This compares with US$87 for electricity generated by peakers, a price that includes the cost of constructing the plant and purchasing its fuel.

But peakers are not regular power plants. They only work for a few hours a day when demand is at its highest, and this makes them less cost-efficient than regular power plants. Yet the fact that big batteries are beginning to make the construction of new peakers uneconomical could be a sign of what is to come: more and cheaper installations that use renewable energy to power tens of thousands of households.
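A rough back-of-the-envelope calculation makes the scale of the gap concrete. The per-megawatt-hour prices come from the tender figures above; the peaking window and plant capacity are hypothetical assumptions chosen only for illustration:

```python
# Rough sketch: why cheap solar+storage undercuts gas peakers.
# Prices ($/MWh) are from the Xcel Energy tender reported above.
# The daily peak window and plant size below are ASSUMED values
# for illustration, not figures from the article.
solar_storage_usd_per_mwh = 36.0   # average winning bid
peaker_usd_per_mwh = 87.0          # gas peaker, incl. construction + fuel

peak_hours_per_day = 4             # assumed daily peaking window
capacity_mw = 100                  # assumed plant size

daily_mwh = capacity_mw * peak_hours_per_day
annual_savings = (peaker_usd_per_mwh - solar_storage_usd_per_mwh) \
    * daily_mwh * 365
print(f"Annual savings: ${annual_savings:,.0f}")
# prints: Annual savings: $7,446,000
```

Even under these modest assumptions, the spread of roughly US$51 per megawatt-hour adds up to millions of dollars a year for a single mid-sized installation, which is why new peaker construction is starting to look uneconomical.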
