Thursday, November 17, 2016

Friday Thinking 18 Nov. 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.


“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes:

Articles:


In many ways, there has never been a better time to be alive. Violence plagues some corners of the world, and too many still live under the grip of tyrannical regimes. And although all the world’s major faiths teach love, compassion and tolerance, unthinkable violence is being perpetrated in the name of religion.

And yet, fewer among us are poor, fewer are hungry, fewer children are dying, and more men and women can read than ever before. In many countries, recognition of women’s and minority rights is now the norm. There is still much work to do, of course, but there is hope and there is progress.

How strange, then, to see such anger and great discontent in some of the world’s richest nations. In the United States, Britain and across the European Continent, people are convulsed with political frustration and anxiety about the future.

Why?
A small hint comes from interesting research about how people thrive. In one shocking experiment, researchers found that senior citizens who didn’t feel useful to others were nearly three times as likely to die prematurely as those who did feel useful. This speaks to a broader human truth: We all need to be needed.

Being “needed” does not entail selfish pride or unhealthy attachment to the worldly esteem of others. Rather, it consists of a natural human hunger to serve our fellow men and women. As the 13th-century Buddhist sages taught, “If one lights a fire for others, it will also brighten one’s own way.”

Dalai Lama: Behind Our Anxiety, the Fear of Being Unneeded




A FULLY CONNECTED WORLD NO LONGER NEEDS A MIDDLE CLASS.
This scenario was easy enough to predict back in the late 1980s. What’s been more difficult to handle has been watching the middle class disintegrate in real time. But since few are in denial about the middle class’s impending doom, the magic thing about the present moment is that everyone everywhere is, with great anxiety, trying to figure out what comes next. What comes next I would call the “blank-collar class.” It’s not Fordist blue-collar. It’s not “Hi! It’s 1978 and I’m a travel agent!” white-collar. Blank collar means this – and listen carefully because this is the rest of your life – if you don’t possess an actual skill (surgery, baking, plumbing) then prepare to cobble together a financial living doing a mishmash of random semi-skilled things: massaging, lawn-mowing, and babysitting the children of people who actually do possess skills, or who own the means of production, or the copper mine, or who are beautiful and charismatic. And here’s the clincher: The only thing that is going to make any of this tolerable is that you have uninterrupted high-quality access to smoking hot Wi-Fi. Almost any state of being is okay in the twenty-first century as long as you can remain connected to this thing that turns you into something that is more than merely human.

DOUGLAS COUPLAND - BOHEMIA = UTOPIA?



“We were raised to believe that democracy, and even the democracy that we have, is a system that has somehow inherent good to it,” he added. But it’s not just democracy that fails. “Hierarchical organizations are failing in the response to decision-making challenges. And this is true whether we’re talking about dictatorships, or communism that had very centralized control processes, and for representative democracies today. Representative democracies still focus power in one or few individuals. And that concentration of control and decision-making makes those systems ineffective.”

Society Is Too Complicated to Have a President, Complexity Suggests




This is a perfect conversation - blending an informed, science-literate politician, a scientist and a journalist to discuss the future of AI.
Obama - what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?
The way I’ve been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a relatively light touch, investing heavily in research and making sure there’s a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more. Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad base set of values. Otherwise, we may find that it’s disadvantaging certain people or certain groups.
The analogy that we still use when it comes to a great technology achievement, even 50 years later, is a moon shot. And somebody reminded me that the space program was half a percent of GDP. That doesn’t sound like a lot, but in today’s dollars that would be $80 billion that we would be spending annually … on AI. Right now we’re spending probably less than a billion.

OBAMA: You’re exactly right, and that’s what I mean by redesigning the social compact. Now, whether a universal income is the right model—is it gonna be accepted by a broad base of people?—that’s a debate that we’ll be having over the next 10 or 20 years. You’re also right that the jobs that are going to be displaced by AI are not just low-skill service jobs; they might be high-skill jobs but ones that are repeatable and that computers can do. What is indisputable, though, is that as AI gets further incorporated, and the society potentially gets wealthier, the link between production and distribution, how much you work and how much you make, gets further and further attenuated—the computers are doing a lot of the work. As a consequence, we have to make some tougher decisions. We underpay teachers, despite the fact that it’s a really hard job and a really hard thing for a computer to do well. So for us to reexamine what we value, what we are collectively willing to pay for—whether it’s teachers, nurses, caregivers, moms or dads who stay at home, artists, all the things that are incredibly valuable to us right now but don’t rank high on the pay totem pole—that’s a conversation we need to begin to have.

Barack Obama Talks AI, Neural Nets, Self-Driving-Cars, and the Future of the World

OBAMA: My general observation is that it has been seeping into our lives in all sorts of ways, and we just don’t notice; and part of the reason is because the way we think about AI is colored by popular culture. There’s a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right? Computers start getting smarter than we are and eventually conclude that we’re not all that useful, and then either they’re drugging us to keep us fat and happy or we’re in the Matrix. My impression, based on talking to my top science advisers, is that we’re still a reasonably long way away from that. It’s worth thinking about because it stretches our imaginations and gets us thinking about the issues of choice and free will that actually do have some significant applications for specialized AI, which is about using algorithms and computers to figure out increasingly complex tasks. We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

But they underestimate the difficulties, and I feel like this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important. In the Media Lab we use the term extended intelligence. Because the question is, how do we build societal values into AI?



The conversations around the future of work seem increasingly to include serious consideration of a universal livable income - including from many so-called mega-capitalists.
"There’s a pretty good chance we end up with a universal basic income, or something like that, due to automation," said Musk. "I'm not sure what else one would do. That’s what I think would happen."

Elon Musk thinks universal income is answer to automation taking human jobs

Tech innovators in the self-driving car and AI industries talk a lot about how many human jobs will be innovated out of existence, but they rarely explain what will happen to all those newly jobless humans. As usual, Tesla and SpaceX founder Elon Musk responds to an obvious question with an answer that may surprise some.

In an interview with CNBC on Friday, Musk said that he believes the solution to taking care of human workers who are displaced by robots and software is creating a (presumably government-backed) universal basic income for all.


Here’s an article from Knowledge@Wharton indicating changing economic conditions - especially for younger workers.
“Working for a big, stable company would have typically been seen as a fantastic career decision — there’s opportunity for advancement and good wages,” Cobb says. “That no longer seems to be the case. The advantages of working in a large firm have really declined in some meaningful ways.”

Why It No Longer Pays to Work for a Larger Firm

A job at a large company used to bring with it several advantages, not the least of which was generally higher pay than similar employees working at a smaller firm. Called the firm-size wage effect, the phenomenon has been extensively studied by economists and sociologists as it has eroded in the last three decades and affected everything from employer-employee relationships to income inequality.

But what was less known is what segment of workers suffered the most under the erosion of the wage effect, and how much that erosion exacerbated the growing income inequality in the U.S.

New research co-authored by Wharton management professor Adam Cobb has provided the answer. In his paper, “Growing Apart: The Changing Firm-Size Wage Effect and Its Inequality Consequences,” Cobb and co-author Ken-Hou Lin of the University of Texas at Austin found that workers in the middle and bottom of the wage scales felt the biggest negative effects from the degradation of the link between firm size and wages. Those at the top of the wage scale, however, experienced no loss in the large-firm wage premium.

Moreover, the uneven erosion of the firm-size wage effect explains around 20% of rising wage inequality during the study period of 1989 to 2014 — a testament, Cobb says, to the impact large firms have on rising inequality.


The challenges of technology and the digital environment have to include the influence of ideology in creating frames that either polarize us or help us find common ground. The last few decades seem to have been dominated by frames that polarize rather than enable common ground - this may not be true everywhere, but America is certainly a dramatic example. This article - an excellent study with clear and powerful visuals - is a must-read for anyone interested in change within political economies.

Political Polarization in the American Public

How Increasing Ideological Uniformity and Partisan Antipathy Affect Politics, Compromise and Everyday Life
Republicans and Democrats are more divided along ideological lines – and partisan antipathy is deeper and more extensive – than at any point in the last two decades. These trends manifest themselves in myriad ways, both in politics and in everyday life. And a new survey of 10,000 adults nationwide finds that these divisions are greatest among those who are the most engaged and active in the political process.


This is a nice summary of work by Yaneer Bar-Yam on society as a complex system - one that requires new forms of governance. A must read for anyone interested in knowledge management and organizational governance within a context of increasing complexity.
It is absurd, then, to believe that the concentration of power in one or a few individuals at the top of a hierarchical representative democracy will be able to make optimal decisions on a vast array of connected and complex issues that will certainly have sweeping and unintended ramifications on other parts of human civilization.

“There’s a natural process of increasing complexity in the world,” Bar-Yam told me. “And we can recognize that at some point, that increase in complexity is going to run into the complexity of the individual. And at that point, hierarchical organizations will fail.”

Society Is Too Complicated to Have a President, Complex Mathematics Suggest

Human society is simply too complex for representative democracy to work. The United States probably shouldn’t have a president at all, according to an analysis by mathematicians at the New England Complex Systems Institute.

NECSI is a research organization that uses math cribbed from the study of physical and chemical systems—bear with me for a moment—and newly available giant data sets to explain how events in one part of the world might affect something seemingly unrelated in another part of the world.

Most famously, the institute’s director, Yaneer Bar-Yam, predicted the Arab Spring several weeks before it happened. He found that seemingly unrelated policy decisions—ethanol subsidies in the US and the deregulation of commodity markets worldwide—led to skyrocketing food prices in 2008 and 2011. It turns out that there is a very neat correlation between the United Nations food price index and unrest and rioting worldwide that no one but Bar-Yam had picked up.
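The kind of link Bar-Yam surfaced can be sketched with a toy Pearson correlation between a food-price series and unrest counts. The numbers below are invented for illustration - they are not the UN index or NECSI's data - but they show how a simple statistic captures that sort of relationship.

```python
# Toy illustration of correlating a food-price index with unrest events.
# All numbers are invented for illustration; they are not NECSI's data.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly food price index and counts of unrest events.
price_index = [130, 142, 158, 171, 190, 224, 238]
riot_events = [2, 3, 4, 6, 9, 14, 17]

r = pearson(price_index, riot_events)
print(f"correlation: {r:.2f}")  # strongly positive for this toy data
```

A strong correlation like this is only a starting point, of course - Bar-Yam's contribution was tracing the causal chain (ethanol subsidies, commodity deregulation) behind the price spikes, not just the statistic.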


The selfie is often considered a signal of increasing narcissism - yet the selfie only exists as an experience of sharing, an ‘I am because we are’ sort of re-identification of a social self. This is a worthwhile read for those interested in how the digital environment is contributing to an emergent condition of identity as a process of identification.
“Rather than thinking of a ‘digital camera,’ I’d suggest that one should think about the image sensor as an input method, just like the multi-touch screen,” Evans writes. “That points not just to new types of content but new interaction models.”

The central self-narrating truth of post-modernity is being made real through technology. To “just be yourself” is not only to stand unvarnished in harsh light, but to put on dog ears and perfect skin in one message, a flaming fireball face the next, sponsored troll hair the message after that, and so on.

Forget drones and spaceships: The Snapchat dogface filter is the future

Think of the training data that Snapchat has had: teens sending pictures of themselves to each other as messages. If people are talking in pictures, they need those pictures to be capable of expressing the whole range of human emotion. Sometimes you want clear skin and sometimes you want deer ears. Lenses let us capture ourselves as we would like to be transmitted, rather than what the camera actually sees.

Which is why the Snapchat camera dominates among teens: A Snapchattian camera understands itself as a device that captures and processes images for the purpose of transmission within a social network. And it’ll bring any technology to bear within the product to make that more fun and interesting and engaging.

Being a camera company in this modern age means helping a picture-taker shape the reality that the camera perceives. We don’t just snap pure photos anymore. The moments we capture do not require fidelity to what actually was. We point a camera at something and then edit the images to make sure that the thing captured matches our mood and perceptions. In fact, most images we take with our cameras are not at all what we would see with our eyes. For one, our eyes are incredible visual instruments that can do things no camera can. But also, we can crank the ISO on fancy cameras to shoot in the dark. Or we can use slo-mo. Or Hyperlapse. Or tweak the photos with the many filter-heavy photo apps that rushed in after Instagram. Hell, the fucking Hubble Space Telescope pictures of the births of stars are as much the result of processing as they are the raw data that’s captured.


In the next few years this capability may well radically change how we make films and videos. This is a must-see 7 min video.

Face2Face: Real-time Face Capture and Reenactment of RGB Videos

This demo video is purely research-focused and we would like to clarify the goals and intent of our work. Our aim is to demonstrate the capabilities of modern computer vision and graphics technology, and convey it in an approachable and fun way. We want to emphasize that computer-generated videos have been part in feature-film movies for over 30 years. Virtually every high-end movie production contains a significant percentage of synthetically-generated content (from Lord of the Rings to Benjamin Button). These results are hard to distinguish from reality and it often goes unnoticed that the content is not real. The novelty and contribution of our work is that we can edit pre-recorded videos in real-time on a commodity PC. Please also note that our efforts include the detection of edits in video footage in order to verify a clip’s authenticity. For additional information, we refer to our project website (see above). Hopefully, you enjoyed watching our video, and we hope to provide a positive takeaway :)


As scientists and laypersons of all stripes and beliefs, we too often forget this important point. Equally important is to realize that frame, metaphor and narrative are also ‘maps that are not the territory’ - yet they can structure an entailing line of reasoning in powerful and subtle ways. A metaphor enables cross-domain mapping of knowledge - enabling the new and unfamiliar to be grasped with knowledge of older, familiar domains.
Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without us realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)
A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.
Thus, financial events deemed to be 5, or 6, or 7 standard deviations from the norm tend to happen with a certain regularity that nowhere near matches their supposed statistical probability.  Financial markets have no biological reality to tie them down.

The Map is Not the Territory

In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.

However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.
  • A.) A map may have a structure similar or dissimilar to the structure of the territory.
  • B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.
  • C.) A map is not the actual territory.
  • D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.

Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. Lewis Carroll made that clear by having Mein Herr describe a map with the scale of one mile to one mile. Such a map would not have the problems that maps have, nor would it be helpful in any way.


Here is a 25 min video panel discussion about the future of sensation - well worth the view for anyone interested in how we may augment our senses. In particular, the cyborg artist presents himself as a definitively augmented being and has even had his passport picture accepted with an ‘antenna’ protruding from his head. Another artist has an implanted device that senses seismic events in the world - she senses any earthquake as it’s happening.

OUT OF YOUR MIND

Cyborg artist Neil Harbisson, neuroscientist Sheila Nirenberg and Meta's John Werner discuss advances in neuroscience and augmented reality that allow us to experience the world like never before.


The domestication of DNA has crossed another threshold.
"I think this is going to trigger ‘Sputnik 2.0’, a biomedical duel on progress between China and the United States, which is important since competition usually improves the end product,” he says.

CRISPR gene-editing tested in a person for the first time

The move by Chinese scientists could spark a biomedical duel between China and the United States.
A Chinese group has become the first to inject a person with cells that contain genes edited using the revolutionary CRISPR–Cas9 technique.

On 28 October, a team led by oncologist Lu You at Sichuan University in Chengdu delivered the modified cells into a patient with aggressive lung cancer as part of a clinical trial at the West China Hospital, also in Chengdu.

Earlier clinical trials using cells edited with a different technique have excited clinicians. The introduction of CRISPR, which is simpler and more efficient than other techniques, will probably accelerate the race to get gene-edited cells into the clinic across the world, says Carl June, who specializes in immunotherapy at the University of Pennsylvania in Philadelphia and led one of the earlier studies.


And here’s another looming future of extending our minds into an ever more diverse ecology of prosthetics.
The new research appears to be the first time wireless brain-control was established to restore walking in an animal. It is part of a campaign by scientists to develop systems that are “fully implantable and invisible” and which could restore volitional movement to paralyzed people, says Bouton.

Brain Control of Paralyzed Limb Lets Monkey Walk Again

A step toward repairing spinal cord injury with electronics.
In a step toward an electronic treatment for paralysis, Swiss scientists say two partly paralyzed monkeys have been able to walk under control of a brain implant.

The studies, carried out at the École Polytechnique Fédérale in Lausanne, Switzerland, successfully created a wireless bridge between the monkeys’ brains and hind limbs, permitting them to advance along a treadmill with a tentative gait.

The research, published today in the journal Nature, brings together several technologies: a brain implant that senses an animal’s intention to walk, electrodes attached to the lower spinal cord that can stimulate walking muscles, and a wireless connection between the two.


Here is a much more innocuous prosthetic - but how long before the camera is an implant?
“We want to empower end users to accomplish these activities of daily living through technology,” says Jon Froehlich at the University of Maryland.

Tiny fingertip camera helps blind people read without braille

No braille? No problem. A new device lets blind people read by popping a miniature camera on their fingertip.

To read printed material, many visually impaired people rely on mobile apps like KNFB Reader that translate text to speech. Snap a picture and the app reads the page aloud. But users sometimes find it difficult to ensure that their photo captures all of the text, and these apps can have trouble parsing a complex layout, such as a newspaper or restaurant menu.

Froehlich and his colleagues have developed a device, nicknamed HandSight,  that uses a tiny camera originally developed for endoscopies. Measuring just one millimetre across, the camera sits on the tip of the finger while the rest of the device clasps onto the finger and wrist. As the user follows a line of text with their finger, a nearby computer reads it out. Audio cues or haptic buzzes help the user make their way through the text, for example changing pitch or gently vibrating to help nudge their finger into the correct position.

In a study published in October, 19 blind people tried out the technology, spending a couple of hours exploring passages from a school textbook and a magazine-style page. On average, they were able to read between 63 and 81 words per minute and only missed a few words in each passage. The average reading speed for an expert braille reader is around 90 to 115 words per minute, while sighted individuals have an average reading speed around 200 words per minute.


This is an interesting development of another type of map - our dietary-culinary map - especially relevant to the 21st Century cosmopolitan experience, where we seek both pleasure and health. This is a weak signal of a looming possibility: big data comparing diets, culinary experience and health at both global and individual scales - soon to include genomic data.

Kissing Cuisines: Exploring Worldwide Culinary Habits on the Web

As food becomes an important part of modern life, recipes shared on the web are a great indicator of civilizations and culinary attitudes in different countries. Similarly, ingredients, flavors, and nutrition information are strong signals of the taste preferences of individuals from various parts of the world. Yet, we do not have a thorough understanding of these palate varieties.

In this paper, we present a large-scale study of recipes published on the Web and their content, aiming to understand cuisines and culinary habits around the world. Using a database of more than 157K recipes from over 200 different cuisines, we analyze ingredients, flavors, and nutritional values which distinguish dishes from different regions, and use this knowledge to assess the predictability of recipes from different cuisines. We then use country health statistics to understand the relation between these factors and health indicators of different nations, such as obesity, diabetes, migration, and health expenditure. Our results confirm the strong effects of geographical and cultural similarities on recipes, health indicators, and culinary preferences between countries around the world.
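As a purely illustrative sketch (not the paper's method), one can imagine assessing a recipe's cuisine from how much its ingredients overlap with known examples. The tiny "database" and ingredient sets below are invented; the study itself drew on more than 157K recipes and richer features such as flavors and nutrition.

```python
# Toy sketch: guess a recipe's cuisine by Jaccard overlap of ingredient
# sets with one hand-picked example recipe per cuisine. The "database"
# below is invented for illustration only.

def jaccard(a, b):
    """Overlap between two ingredient sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

RECIPES = {
    "italian": {"tomato", "basil", "olive oil", "garlic", "parmesan"},
    "japanese": {"soy sauce", "rice", "mirin", "nori", "ginger"},
    "mexican": {"tortilla", "cumin", "chili", "lime", "cilantro"},
}

def guess_cuisine(ingredients):
    """Return the cuisine whose example recipe overlaps most with the input."""
    return max(RECIPES, key=lambda c: jaccard(ingredients, RECIPES[c]))

print(guess_cuisine({"tomato", "garlic", "olive oil", "pasta"}))  # italian
```

Even this crude overlap measure hints at why cuisines are predictable from ingredients: regional recipes cluster around shared pantries, which is the geographic and cultural signal the paper quantifies at scale.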


This is a 4 min video that presents how good games enable players to engage in productive struggle.

What Do Games Have to Do with Productive Struggle?

How can productive struggle foster the learning process in students' classroom experiences?
Education researcher and interactive game developer Ki Karou shares a selection of game-based learning strategies that can develop students' capacity for productive struggle. The ultimate goal is to develop our students into curious, tenacious and creative problem solvers.

And another short article about using games to enhance learning.
“It’s not being made to replace the teacher, it’s not being made to replace instruction,” said Geoffrey Suthers, a game designer who was part of the team that developed Project Sampson, which combines math concepts with real-world problem solving as players work to manage the impact of a natural disaster. “It’s a way to have further exposure, and also to give you an idea of how these math concepts are applied.”
“There’s this false dichotomy between fun and work,” said Offenholley. “Actually, when you play games, you work hard. Because you’re having fun, you’re willing to do more work. And that’s the point.”

Community College Pilot Project Finds Game-based Learning a Winner in Remedial Math

Players of the video game xPonum have a clear mission: set the trajectory of the laser to hit the row of gems. Collect the gems to clear levels and earn badges—and learn some algebra and trigonometry along the way.

This summer, nine students at the Borough of Manhattan Community College (BMCC) in New York City took part in a pilot program to develop a game-based remedial math curriculum. The course is designed to build algebra skills for science, technology, engineering and math majors, and to prepare students to take a pre-calculus course in the fall.

One of three games developed for this course, xPonum focuses on algebra and trigonometry. The other two—Project Sampson, which combines mapping and geography skills with math content, and Algebots, which turns algebra into a puzzle game—were also played by students throughout the summer course.

Another benefit of game-based learning is that it forces students out of a “procedural mindset,” Offenholley said. In her experience, students like having a set of procedures: memorize the content, pass the test, finish the course. But she thinks more students would be successful in math class—and maybe even enjoy it more—if they not only memorized what to do to solve an equation, but also understood why those steps worked. Games motivate students to dig deeper into the content and to experiment with a subject like math, which can be very dry, in a fun, playful way.


Speaking of learning from games - we’ve seen how AI has overcome human masters in chess and Go; the next frontier is complex real-time strategy games like StarCraft.
One of the most interesting parts of StarCraft’s “messiness” is its use of incomplete information as a gameplay parameter. Much of each player’s vision is obscured by a ‘fog of war,’ forcing players to predict one another’s decisions while planning their own. That’s a challenge not faced by developers of artificial intelligence for Chess, for instance, where the whole board is visible at once.

Google and Blizzard Will Help Researchers Use Starcraft to Train Artificial Intelligence

Insights from gameplay could help with real-world AI applications.
At this week’s BlizzCon convention in California, game developer Blizzard announced that it would release tools to allow third parties to teach artificial intelligences to play the real-time wargame Starcraft II. The tools are being developed in collaboration with Google’s DeepMind team, and will use the DeepMind platform.

In a blog post accompanying the announcement, the DeepMind team said Starcraft “is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real world.” The game involves interconnected layers of decisions, as players use resources to build infrastructure and assets before engaging in direct combat.

StarCraft’s complexity, compared with Chess or Go, makes it closer to the real-world problems faced by computers that do things like plan logistics networks. Those complex systems still present serious challenges for even the most powerful computers, and insights gleaned from StarCraft could help make their solutions faster and more efficient.


Puzzles, AI and Robotics - the progress continues to accelerate - not just chess or go.

Robot 'sets new Rubik's Cube record'

A robot has just set a new record for the fastest-solved Rubik's Cube, according to its makers.
The Sub1 Reloaded robot took just 0.637 seconds to analyse the toy and make 21 moves, so that each of the cube's sides showed a single colour.
That beats a previous record of 0.887 seconds, which was achieved by an earlier version of the same machine using a different processor.
Infineon provided its chip to highlight advancements in self-driving car tech.

At the press of a button, shutters covering the robot's camera sensors were lifted, allowing it to detect how the cube had been scrambled.
It then deduced a solution and transmitted commands to six motor-controlled arms. These held the central square of each of the cube's six faces in place and spun them to solve the puzzle.
All of this was achieved in a fraction of a second, and it was only afterwards that the number of moves could be counted by checking a software readout.


This short article explores the idea of empathy as both learnable and a choice we are able to make - this may be important when we consider the current perception of social divisiveness in our political landscapes and the consequences of training our warriors to be more ‘effective’ in conflict.
Even those suffering from so-called empathy deficit disorders like psychopathy and narcissism appear to be capable of empathy when they want to feel it.

Empathy Is Actually a Choice

Not only does empathy seem to fail when it is needed most, but it also appears to play favorites. Recent studies have shown that our empathy is dampened or constrained when it comes to people of different races, nationalities or creeds. These results suggest that empathy is a limited resource, like a fossil fuel, which we cannot extend indefinitely or to everyone.

While we concede that the exercise of empathy is, in practice, often far too limited in scope, we dispute the idea that this shortcoming is inherent, a permanent flaw in the emotion itself. Inspired by a competing body of recent research, we believe that empathy is a choice that we make whether to extend ourselves to others. The “limits” to our empathy are merely apparent, and can change, sometimes drastically, depending on what we want to feel.


Here is one signal of the future of ubiquitous sensors and 3D printing. There is a short video that is worth the view.
“One of the interesting things about the design is that much of the complexity is in the design of the body which is 3D printed,” researcher Mark Yim told Digital Trends. “Since the cost of 3D-printed parts are based on the volume of plastic in the part, and independent of complexity, the flyer is very low-cost.”

This Quarter-Sized, Self-Powered Drone Is the Smallest in the World

The Piccolissimo comes in two sizes, a quarter-sized one weighing less than 2.5 grams and a larger, steerable one that's heavier by 2 grams and wider by a centimeter.
According to the researchers, 100 or 1,000 small controllable flyers like the Piccolissimo could cover more of a disaster site than a single large drone and more cheaply.

Piccolissimo is made possible by UPenn’s ModLab, which specializes in “underactuated” robots that can achieve great ranges of motion with the fewest motors possible. The tiny tech only has two parts: the propeller and the body. The motor spins the body 40 times per second in one direction, while the propeller spins 800 times per second the opposite way. The drone can be steered because the propeller is mounted slightly off-center. Changing the propeller speed at precise points during the drone’s flight changes its direction.
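
The steering trick described above can be sketched as a toy model (the 40 rev/s figure is from the article; the thrust numbers and phase threshold are invented for illustration - this is not UPenn’s controller):

```python
import numpy as np

# Toy model: because the body spins, the off-center propeller's lateral thrust
# component sweeps through every direction once per revolution. Pulsing extra
# thrust at one phase angle each revolution leaves a net force in that
# direction; constant thrust averages out to zero.
def net_lateral_force(thrust_fn, steps=360):
    phases = np.linspace(0, 2 * np.pi, steps, endpoint=False)
    fx = np.mean([thrust_fn(p) * np.cos(p) for p in phases])
    fy = np.mean([thrust_fn(p) * np.sin(p) for p in phases])
    return fx, fy

constant = net_lateral_force(lambda p: 1.0)                    # no steering
pulsed = net_lateral_force(lambda p: 1.0 + (np.cos(p) > 0.9))  # boost near phase 0
# `constant` averages to ~(0, 0); `pulsed` has a positive x component,
# so the drone drifts toward the phase at which thrust was boosted.
```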


The future of death is an interesting space to explore - the hippies were the vanguard of reclaiming the birthing experience for women and families away from the institutional conveniences of male-centric operating-room control. Now the baby-boomers are nearing the next threshold of social rites of passage - how will we be able to ‘own’ our own and our families’ passage from life? This is a worthwhile read toward new concepts of ancient burial rites. There is a 27 min video included in the article.
“The choice to have a green burial reflects a deep understanding of our place in the larger ecosystem and the cosmos,” she tells The Creators Project. Astrophysicists, in particular, have a keen grasp of this concept. Neil deGrasse Tyson has said he wants his body to be “buried not cremated, so that the energy contained gets returned to the earth, so the flora and fauna can dine upon it, just as I have dined.”

Flesh-Eating Mushrooms Are the Future of Burial Tech

This NYFW, Ace Hotel New York will showcase garments no one would wear in this lifetime. Natural Causes is an Infinity Burial Suit exhibition co-curated by Coeio, a "green burial" company that created the suit as a radical alternative to traditional funerary practices. The burial suit spawned from the notion that mushrooms could be used to naturally decompose and cleanse toxins from a dead body, an idea manifested by a spore-laden jumpsuit meant to harmoniously eliminate pollutants while nourishing plants in the burial area. The Ace Hotel gallery show kicks off September 8 and runs through the end of the month.

Coeio, founded by MIT-trained artist and inventor Jae Rhim Lee, began as an artistic provocation meant to challenge cultural attitudes towards death, but quickly grew into a product, co-created with fashion designer Daniel Silverstein, with the potential to revolutionize the funeral industry. Typical American burials involve pumping a body full of synthetic fillers and formaldehyde, a carcinogen, to preserve a corpse and make it look alive. Bodies are entombed in caskets varnished with toxic chemicals, and the EPA rates casket manufacturers as one of the worst hazardous waste generators. Cremation is hardly cleaner; every year, 5,000 lbs of mercury are released into the atmosphere from dental fillings.


This may be good news for scientists who feel that their texting is too constrained.

I can haz more science emoji? Host of nerd icons proposed

At a conference in San Francisco, a group drafted proposals to add more planets, instruments and other science icons to the keyboard.
Science lovers, rejoice! More emoji designed for the nerd in all of us are on their way. This weekend, at the first-ever Emojicon in San Francisco, California, a group of science enthusiasts and designers worked on proposals for several new science-themed emoji. If these are approved, in a year or two, people could be expressing themselves with a heart–eye emoji wearing safety goggles.

On 6 November, the science emoji group submitted a formal proposal to the Unicode Consortium, the organization that oversees the official list of these icons, to include emoji for the other planets — aside from Earth — and Pluto. A second proposal, which the team plans to submit in the coming weeks, includes lab equipment (a beaker, Bunsen burner, fire extinguisher, Petri dish and goggles), a DNA double helix, a microbe, a chemical-element tile for helium, a mole (to represent the unit of measure and the animal) and a water molecule.

These would join existing official science-related emoji, such as a microscope, a telescope and a magnifying glass. There’s also an alembic — a piece of equipment used to distil chemicals — along with common plants and charismatic megafauna such as pandas. Unofficial sets of science-themed emoji include caricatures of Albert Einstein.

Thursday, November 10, 2016

Friday Thinking 11 Nov. 2016




This tells us two crucial things about the RNA sequence space. First, there are many, many possible sequences that will all serve the same function. If evolution is “searching” for that function by natural selection, it has an awful lot of viable solutions to choose from. Second, the space, while unthinkably vast and multi-dimensional, is navigable: You can change the genotype neutrally, without losing the all-important phenotype. So this is why the RNAs are evolvable at all: not because evolution has the time to sift through the impossibly large number of variations to find the ones that work, but because there are so many that do work, and they’re connected to one another.

These findings uncover a property of biological systems even deeper than the evolutionary processes that shape them. They reveal the landscape on which that shaping took place, and they show that it was only possible at all because the landscape has a very specific topology, in which functionally similar combinations of the component parts—genes, metabolites, protein or nucleic-acid sequences—are connected into vast webs that stretch throughout the whole of the multidimensional space, each intricately woven amidst countless others.

One might argue that the original creative act of the living world was the generation of the components themselves: the chemical ingredients, such as amino acids and sugars, that comprise the molecules of life. But this now seems like the easy part, the kind of happy accident that chemistry can supply given the right raw materials and environment. The harder question is how one can get beyond that passive soup to kick-start Darwinian evolution. Manrubia thinks that this primal creative step might itself be a consequence of the richness and intimate interweaving of neutral (or quasi-neutral) networks. This means that, even for random, abiotically generated RNA sequences, there is a significant chance of finding ones that perform some useful function. “In a sense, you have function for free if the phenotypes are sufficiently represented in sequence space,” she says. And her computer simulations show that such RNA sequences aren’t rare. “So sufficiently good solutions to act as seeds of the evolutionary process might arise in the absence of the evolutionary process itself.” In particular, there’s a fair chance of hitting on sequences that can replicate—and then you’re up and running. “Natural selection can very quickly turn mediocre solutions into fully adaptive ones,” Manrubia says.

Some bacteria seem to undergo more mutations than is “wise” for the individual, if most mutations decrease fitness. An overly simplistic explanation is that many mutations are nonetheless good for the population as a whole because they offer more options for adapting to new environmental challenges. But mutations on robust networks have more chance of being neutral—a feature that is good both for the individual (because it has less chance of incurring deleterious mutations) and for the population (because it provides new ways to adapt when the need arises).

“The more complex they are, the more rewiring they tolerate,” says Wagner. Not only does this open up possibilities for electronic circuit design using Darwinian principles, but it suggests that evolvability, and the corollary of creativity or innovability, is a fundamental feature of complex networks like those found in biology.

These ideas suggest that evolvability and openness to innovation are features not just of life but of information itself.
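
The neutral-network idea in the excerpts above can be illustrated with a toy genotype-phenotype map (bitstrings and a motif test stand in for RNA sequences and folding - an invented simplification, not the paper’s model):

```python
from itertools import product
from collections import deque

# Toy map: genotype = an 8-bit string; phenotype = "does it contain the
# motif 11?". Many genotypes share the function, and single mutations can
# move between them without ever losing it - a navigable neutral network.
L = 8

def phenotype(g):
    return '11' in g

def neighbors(g):
    # all single-mutation (one bit flip) variants
    return [g[:i] + ('1' if g[i] == '0' else '0') + g[i + 1:] for i in range(L)]

genotypes = [''.join(bits) for bits in product('01', repeat=L)]
functional = [g for g in genotypes if phenotype(g)]
fraction = len(functional) / len(genotypes)   # most of sequence space works

# Breadth-first search using only phenotype-preserving mutations reaches
# every functional genotype from any starting point: the network is connected.
start = functional[0]
seen = {start}
queue = deque([start])
while queue:
    g = queue.popleft()
    for n in neighbors(g):
        if phenotype(n) and n not in seen:
            seen.add(n)
            queue.append(n)
connected = (len(seen) == len(functional))
```

Even in this tiny space, 201 of the 256 genotypes share the phenotype and all of them are mutationally connected - evolution can wander the whole web without ever breaking the function.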

The Strange Inevitability of Evolution



The discussion of a guaranteed livable income seems to be emerging as an important, perhaps inevitable, foundation for the digital political-economy of the 21st century.

Banker Wahlroos: Basic income only viable solution in face of massive job losses

Björn Wahlroos, Finnish banking magnate and chairman of the board for multiple big-name companies in the Nordics, says robots are slowly replacing skilled labour in the marketplace. He predicts that many Finnish residents will soon be faced with two alternatives: low-paid work or unemployment.


This is full of portents and omens - it is relevant to all political-economies in the 21st century, representing a phase transition into new attractors of efficiency. It is reason for a serious effort to transform Internet access into both a human right and a public infrastructure - a new commons for common wealth.

Shut Down the Internet, and the Economy Goes With It

Government leaders who turn off the Internet as a means of censorship are shooting their economies in the foot.
Governments damage their economies when they shut down Internet applications and services, according to a new analysis.

During the past year, 81 disruptions in 19 countries cost those economies at least $2.4 billion, according to a study by Darrell West at the Brookings Institution that estimates the cost of disrupting a nation’s online activities.

Governments can cut off citizens’ Internet access for a variety of reasons, including to quell dissent or force a company to comply with a law. In 2011, the Egyptian government shut down access for five days to prevent communication between protesters, while more recently, Brazil blocked the messaging app WhatsApp after it refused to comply with requests for user data.

As economic activity increasingly relies on the Internet, these kinds of disruptions are “very counterproductive,” says West.


This is a long article with great infographics and a downloadable pdf.

The Rise of Co-working

A Growing Workplace Movement
Abstract
Expanding from its beginnings as an experimental office concept for entrepreneurs and technologists, co-working has quickly emerged as an effective workplace strategy for a growing number of corporate organizations. A range of off-site and on-site co-working environments are being explored by businesses to support their ongoing expansion and organisational requirements while accommodating the shifting work preferences and values of an increasingly diverse workforce. Because these shared workspaces often can provide businesses with greater flexibility and efficiency than traditional office leases, the global co-working trend is expected to continue indefinitely. This paper explores the growth and evolution of co-working, including factors contributing to the global movement and specific examples of businesses that are benefiting from co-working strategies within their own organisations. The goal is to equip corporate real estate (CRE) executives with insights and resources to explore co-working as a practical real estate strategy which can contribute to improving an organisation’s overall performance by providing flexible, productive work environments which foster collaboration, innovation, extended networking and passive recruiting. By embracing these progressive ideals, businesses will be well equipped to meet the needs and expectations of their future workforce.


The digital environment has not been kind to pre-digital media - we’ve known this for quite a while - this article provides a graph that shows just how dramatic the decline of print-media advertising revenue has been since 2000.

US newspapers’ lost advertising revenue found

And why the answer to the problem is not about scale.
Everyone in media in the US saw the graph a couple of years ago showing the cliff that the newspaper industry has fallen off with respect to advertising revenue since the beginning of the first decade of the 21st Century thanks to a simple bit of graphing by Mark J. Perry.
Now, media watchers have added the numbers and shown where that money went. Ben Thompson of the Stratechery blog added in Facebook’s revenue rise to show one reason why newspapers in the US are facing even greater headwinds, even as the US economy starts to show a little more life. Thomas Baekdal took it one step further, adding in Google’s revenue. It almost mirrors the decline of newspaper advertising, although Google’s rise seems a bit steeper.


This is a very interesting dystopian perspective on the future of cities - there is a 5 min video that is worth the view.

Bizarre leaked Pentagon video is a science fiction story about the future of cities

Cities in 2030 will be hives of scum and villainy (plus Bitcoin and Anonymous).
This short, untitled film was leaked to The Intercept after being screened as part of an “Advanced Special Operations Combating Terrorism” course convened by Joint Special Operations University (JSOU). Originally made by the Army, it's about how troops will deal with megacities in the year 2030. What's surprising is that it acknowledges social problems that the US government usually ignores or denies.


Two interesting links based on the work of data scientist Cesar Hidalgo (whose work also involves the Atlas of Economic Complexity). The first article is a visualization of the famous Clinton emails. This is interesting more for what can be done with emails than for the particular data set. The article has an interactive visualization that demonstrates in a relatively accessible way how a person’s knowledge network can be seen and explored. For anyone interested in knowledge management and understanding how work gets done in an organization this is a must view.

This is what Clinton’s circle of trust looks like from her emails

The MIT Media Lab created a visualization showing connections between the tens of thousands of emails sent by the candidate from a private server when she was secretary of state.
A lot has been said in recent months about the content of Hillary Clinton’s emails and whether they put national security in danger. Thousands of journalists and groups worldwide have dug into the correspondence distributed by Wikileaks, some fueling the controversy, and others defending her from it.

Clinton Circle, a new analysis made by the Macro Connections group from MIT Media Lab is the first graphic proposal that shows the relationships behind email interactions and also, facilitates reading these emails.

Using a tool they had previously created called Immersion, researchers loaded nearly 30,000 private mails sent or received from the Hillary Clinton email address--which have already been published by WikiLeaks.

This article discusses the above data visualization, what was learned in creating it and the experiences of publishing it - worth the read.

What I learned from visualizing Hillary Clinton’s emails

….the whole point of making this tool is that you can use it to come up with your own interpretation of the data. That said, you might be curious about mine, so I’ll share it with you too.

So what I got from reading some of Clinton’s email is another piece of evidence confirming my intuition that political systems scale poorly. The most influential actors on them are spending a substantial fraction of their mental capacity thinking about how to communicate, and do not have the bandwidth needed to deal with many incoming messages (the unresponded-to emails). This is not surprising considering the large number of people they interact with (although this dataset is rather small. I send 8k emails a year and receive 30k. In this dataset Clinton is sending only 2K emails a year).

Our modern political world is one where a few need to interact with many, so they have no time for deep relationships — they physically cannot. So what we are left is with a world of first impressions and public opinion, where the choice of words matters enormously, and becomes central to the job. Yet, the chronic lack of time that comes from having a system where few people govern many, and that leads people to strategize every word, is not Clinton’s fault. It is just a bug that affects all modern political systems, which are ancient Greek democracies that were not designed to deal with hundreds of millions of people.

Here’s a link to Hidalgo’s work on public data - well worth the view for anyone interested in how data visualization can mitigate the classic problem of information overload.

Data USA

THE MOST COMPREHENSIVE VISUALIZATION OF U.S. PUBLIC DATA
In 2014, Deloitte, Datawheel, and Cesar Hidalgo, Professor at the MIT Media Lab and Director of MacroConnections, came together to embark on an ambitious journey -- to understand and visualize the critical issues facing the United States in areas like jobs, skills and education across industry and geography. And, to use this knowledge to inform decision making among executives, policymakers and citizens.

Our team, comprised of economists, data scientists, designers, researchers and business executives, worked for over a year with input from policymakers, government officials and everyday citizens to develop Data USA, the most comprehensive website and visualization engine of public US Government data. Data USA tells millions of stories about America. Through advanced data analytics and visualization, it tells stories about: places in America—towns, cities and states; occupations, from teachers to welders to web developers; industries--where they are thriving, where they are declining and their interconnectedness to each other; and education and skills, from where is the best place to live if you’re a computer science major to the key skills needed to be an accountant.

Data USA puts public US Government data in your hands. Instead of searching through multiple data sources that are often incomplete and difficult to access, you can simply point to Data USA to answer your questions. Data USA provides an open, easy-to-use platform that turns data into knowledge. It allows millions of people to conduct their own analyses and create their own stories about America – its people, places, industries, skill sets and educational institutions. Ultimately, accelerating society’s ability to learn and better understand itself.


The blockchain and/or distributed ledger technologies are developing ever faster and - despite more promise than delivery so far - are an inevitable disruptive technology that is only in its early birthing stages.
A blockchain is a digital ledger that records transactions or other data over time. But records in a blockchain can be made effectively indelible using cryptography, and a blockchain can be designed to be operated by a group of companies or individuals together such that no single entity controls the system or its data.
Apache—and the community of developers Behlendorf nurtured to support it—still powers roughly half of all active websites. He wants Hyperledger’s blockchains to be similarly pervasive, if mostly invisible. “If we do our job right you won't ever hear about us,” he says. “We become plumbing.”

Web Pioneer Tries to Incubate a Second Digital Revolution

Twenty years ago, Brian Behlendorf helped kick-start the Web—now he’s betting the technology behind Bitcoin can make the world fairer.
Brian Behlendorf knows it’s a cliché for veteran technologists like himself to argue that society could be run much better if we just had the right software. He believes it anyway.

“I’ve been as frustrated as anybody in technology about how broken the world seems,” he says. “Corruption or bureaucracy or inefficiency are in some ways technology problems. Couldn’t this just be fixed?” he asks.

This summer Behlendorf made a bet that a technology has appeared that can solve some of those apparently human problems. Leaving a comfortable job as a venture capitalist working for early Facebook investor and billionaire Peter Thiel, he now leads the Hyperledger Project, a nonprofit in San Francisco created to support open-source development of blockchains, a type of database that underpins the digital currency Bitcoin by verifying and recording transactions.

Many governments and large companies are exploring blockchain technology not because they want to use digital currency—Bitcoin doesn’t look likely to become widely used—but as a way to work with other kinds of data. They think blockchains could make things as varied as financial trades, digital health records, and manufacturing supply chains more efficient and powerful.
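
The ledger-plus-cryptography idea described above can be sketched in a few lines (a minimal hash-chain toy, not Hyperledger’s actual code, with invented record fields):

```python
import hashlib
import json

# Minimal sketch of a blockchain-style ledger: each block stores the hash of
# its predecessor, so altering any old record invalidates every later link -
# this is what makes entries "effectively indelible".
def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    prev = block_hash(chain[-1]) if chain else '0' * 64
    chain.append({'prev': prev, 'record': record})

def verify(chain):
    return all(chain[i]['prev'] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append(ledger, {'from': 'alice', 'to': 'bob', 'amount': 5})
append(ledger, {'from': 'bob', 'to': 'carol', 'amount': 2})
assert verify(ledger)                  # the intact chain checks out
ledger[0]['record']['amount'] = 500    # tamper with history...
tampered_ok = verify(ledger)           # ...and verification now fails (False)
```

Real systems add distributed consensus on top, so that no single entity can rewrite the chain and simply recompute the hashes - but the tamper-evidence itself is just this.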


This is a fascinating piece that focuses on language and moral identity - it opens a window onto thoughts about how language affects other dimensions of our reasoning selves.

How Morality Changes in a Foreign Language

Fascinating ethical shifts come with thinking in a different language
What defines who we are? Our habits? Our aesthetic tastes? Our memories? If pressed, I would answer that if there is any part of me that sits at my core, that is an essential part of who I am, then surely it must be my moral center, my deep-seated sense of right and wrong.

And yet, like many other people who speak more than one language, I often have the sense that I’m a slightly different person in each of my languages—more assertive in English, more relaxed in French, more sentimental in Czech. Is it possible that, along with these differences, my moral compass also points in somewhat different directions depending on the language I’m using at the time?

Psychologists who study moral judgments have become very interested in this question. Several recent studies have focused on how people think about ethics in a non-native language—as might take place, for example, among a group of delegates at the United Nations using a lingua franca to hash out a resolution. The findings suggest that when people are confronted with moral dilemmas, they do indeed respond differently when considering them in a foreign language than when using their native tongue.


The dialectic relationship between the sensors of identification and efforts to sustain anonymity continues. There is a link to the original paper in the article.

Researchers trick facial recognition systems with facial features printed on big glasses

In Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, researchers from Carnegie-Mellon and UNC showed how they could fool industrial-strength facial recognition systems (including Alibaba's "smile to pay" transaction system) by printing wide, flat glasses frames with elements of other peoples' faces with "up to 100% success."

The glasses cost $0.22/pair.


This new advance in image recognition and transformation from Google is beautiful and potentially enabling of new art forms - sort of like sampling is to music. The very short video and pictures are worth the view.

Supercharging Style Transfer

Pastiche. A French word, it designates a work of art that imitates the style of another one (not to be confused with its more humorous Greek cousin, parody). Although it has been used for a long time in visual art, music and literature, pastiche has been getting mass attention lately with online forums dedicated to images that have been modified to be in the style of famous paintings. Using a technique known as style transfer, these images are generated by phone or web apps that allow a user to render their favorite picture in the style of a well known work of art.

Although users have already produced gorgeous pastiches using the current technology, we feel that it could be made even more engaging. Right now, each painting is its own island, so to speak: the user provides a content image, selects an artistic style and gets a pastiche back. But what if one could combine many different styles, exploring unique mixtures of well known artists to create an entirely unique pastiche?
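
One way to picture mixing styles: in multi-style networks of this kind, each learned style reduces to a small vector of per-channel scale and shift parameters, so a blended pastiche is just a weighted average of those vectors. Here is a toy sketch under that assumption (the style names, parameter values and channel count are all invented; this is not Google’s model):

```python
import numpy as np

# Toy sketch: a "style" is a per-channel scale (gamma) and shift (beta)
# applied to normalized features; mixing styles is a convex combination
# of those parameter vectors.
def stylize(features, gamma, beta, eps=1e-5):
    mu = features.mean(axis=(0, 1), keepdims=True)
    sigma = features.std(axis=(0, 1), keepdims=True)
    return gamma * (features - mu) / (sigma + eps) + beta

def mix_styles(styles, weights):
    w = np.array(weights, dtype=float) / np.sum(weights)
    gamma = sum(wi * s['gamma'] for wi, s in zip(w, styles))
    beta = sum(wi * s['beta'] for wi, s in zip(w, styles))
    return gamma, beta

C = 3  # feature channels
style_a = {'gamma': np.full(C, 1.5), 'beta': np.full(C, 0.2)}
style_b = {'gamma': np.full(C, 0.5), 'beta': np.full(C, -0.2)}

gamma, beta = mix_styles([style_a, style_b], [0.5, 0.5])  # a 50/50 pastiche
features = np.random.default_rng(1).normal(size=(32, 32, C))
pastiche = stylize(features, gamma, beta)
```

Varying the weights continuously is what lets a user explore the space between artists rather than picking a single one.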


Here’s something that may be an orthogonal disruption of current computational paradigms - but one that complements rather than displaces them.
Research indicates that reservoir computers could be extremely robust and computationally powerful and, in theory, could effectively carry out an infinite number of functions. In fact, simulated reservoirs have already become very popular in some aspects of artificial intelligence thanks to precisely these properties. For example, systems using reservoir methods for making stock-market predictions have indicated that they outperform many conventional artificial intelligence technologies. In part, this is because it turns out to be much easier to train AI that harnesses the power of a reservoir than one that does not.

There’s a way to turn almost any object into a computer – and it could cause shockwaves in AI

The latest chip in the iPhone 7 has 3.3 billion transistors packed into a piece of silicon around the size of a small coin. But the trend for smaller, increasingly powerful computers could be coming to an end. Silicon-based chips are rapidly reaching a point at which the laws of physics prevent them being any smaller. There are also some important limitations to what silicon-based devices can do that mean there is a strong argument for looking at other ways to power computers.

Perhaps the most well-known alternative researchers are looking at is quantum computers, which manipulate the properties of the chips in a different way to traditional digital machines. But there is also the possibility of using alternative materials – potentially any material or physical system – as computers to perform calculations, without the need to manipulate electrons like silicon chips do. And it turns out these could be even better for developing artificial intelligence than existing computers.

The idea is commonly known as “reservoir computing” and came from attempts to develop computer networks modelled on the brain. It involves the idea that we can tap into the behaviour of physical systems – anything from a bucket of water to blobs of plastic laced with carbon nanotubes – in order to harness their natural computing power.
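
A minimal software sketch of the reservoir idea, in the echo-state style (the sizes, scaling and the sine-prediction task are arbitrary choices for illustration): the “reservoir” is a fixed random recurrent system that is never trained - learning touches only a linear readout, which is why training is so cheap compared with end-to-end neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                    # reservoir size
W_in = rng.normal(0, 0.5, (N, 1))          # fixed, random input weights
W = rng.normal(0, 1.0, (N, N))             # fixed, random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: the "echo" fades

def run_reservoir(u):
    # Drive the fixed nonlinear dynamics with the input signal and
    # record the reservoir state at each step.
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)

# Task: predict the next value of a sine wave from its history.
t = np.linspace(0, 20 * np.pi, 1000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Train ONLY the linear readout (least squares), discarding a warm-up period.
W_out = np.linalg.lstsq(X[100:], y[100:], rcond=None)[0]
pred = X @ W_out
error = np.sqrt(np.mean((pred[100:] - y[100:]) ** 2))
# `error` comes out tiny: the untrained reservoir did the heavy lifting.
```

The physical version replaces the simulated `tanh` dynamics with a bucket of water, a blob of plastic, or any other system rich enough to transform its inputs - the only part that must be engineered is the readout.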


The day of ubiquitous digital text media is very close - the key barriers are no longer technological but rather incumbent business models.

Bendable electronic paper shows full colour scale

Less than a micrometre thin, flexible and giving all the colours that a regular LED display does, it still needs ten times less energy than a Kindle tablet. Researchers at Chalmers University of Technology have developed the basis for a new electronic paper. Their results were recently published in the high impact journal Advanced Materials.


The phase transition in energy geopolitics is past the point of no-return - despite ever more desperate hyperbole from incumbents. The graphs in the article are worth the view. We are still only in the very early phase of harnessing all forms of renewable energy - zero-cost marginal energy.
“What I see is we are witnessing the transformation of energy system markets led by renewables and this is happening very quickly,” said Dr Fatih Birol, executive director of the IEA. “This transformation and the growth of renewables is led by the emerging countries in the years to come, rather than the industrialised countries.”
“The cost of wind dropped by about one third in the last five to six years, and that of solar dropped by 80%,” said Birol, adding that while the cost of gas had also fallen recently, it was not at the same speed that green energy had become cheaper. “The decline in renewables [cost] was very sharp and in a very short period of time. This is unprecedented.”

Renewables made up half of net electricity capacity added last year

Green energy accounted for more than half of net electricity generation capacity added around the world last year for the first time, leading energy experts have found.
The International Energy Agency (IEA) said the milestone was evidence of a rapid transformation in energy taking place, and predicted capacity from renewable sources will grow faster than oil, gas, coal or nuclear power in the next five years.

But the analysts said the outlook in the UK has deteriorated since the Conservative government took power last year and cut support for wind and solar power. The agency’s chief said Britain had huge renewable energy potential and ministers needed to design stronger policies to exploit it.


There has been some discussion about how carbon-based energy is still a growing proposition. Not only is the implementation of renewables accelerating, but the continued decline in renewable costs is reversing recent commitments to older forms of energy. This doesn’t bode well for incumbents seeking to build more oil pipelines.

China Halts Construction On 17 Gigawatts Of Coal-Fired Plants

The Chinese authorities have halted construction on 30 large coal-fired power plants with a combined capacity of 17 GW — a figure that is greater than the entire coal fleet of the United Kingdom — underscoring the country’s desires to minimize its reliance upon coal-generated electricity.

Greenpeace’s Energydesk reported the move, referring to Chinese-language news reports that also claim China is dramatically downscaling plans for transmitting coal-fired electricity from the west of China to the coast, via a network of long-distance transmission lines.

On top of that, a further 30 large coal-fired power plants are also being scrapped — ten of which were already under construction.


There are many forms of renewables. In a healthy ecology all outputs are inputs elsewhere - anything that isn’t is a toxin, and a niche opportunity waiting to be filled.

This material is stronger, cheaper and greener than plastic. And it's made from pollution

By weight, AirCarbon is about 40% air and 60% greenhouse gas. No oil. No fossil fuels. Just air and captured carbon emissions that would otherwise become part of the air, combined.

AirCarbon is a special material. It is produced in most known living organisms, from humans to tigers to trees; an evolutionarily ancient molecule used to store carbon. It is biodegradable, as strong as plastic, and it can be melted and formed into shapes.

Over the past thirteen years, we figured out how to make it from air and greenhouse gas. Around the clock at this plant, our team watches, and adjusts, and optimizes.

In the past 15 months, Newlight has signed 74 billion pounds of AirCarbon in off-take purchase or licensed production agreements: global-scale agreements that will create significant value by reducing cost for consumers, moving oil out of our products, and reducing the amount of carbon in the air.


And another innovation transforming our trash.
“Imagine that the tons of metal waste discarded every year could be used to provide energy storage for the renewable energy grid of the future, instead of becoming a burden for waste processing plants and the environment,” said Cary Pint, assistant professor of mechanical engineering at Vanderbilt University.

Making high-performance batteries from junkyard scraps

Take some metal scraps from the junkyard; put them in a glass jar with a common household chemical; and, voilà, you have a high-performance battery.

To make such a future possible, Pint headed a research team that used scraps of steel and brass – two of the most commonly discarded materials – to create the world’s first steel-brass battery that can store energy at levels comparable to lead-acid batteries while charging and discharging at rates comparable to ultra-fast charging supercapacitors.

The secret to unlocking this performance is anodization, a common chemical treatment used to give aluminum a durable and decorative finish. When scraps of steel and brass are anodized using a common household chemical and residential electrical current, the researchers found that the metal surfaces are restructured into nanometer-sized networks of metal oxide that can store and release energy when reacting with a water-based liquid electrolyte.


And of course the inevitable realization is that we can. The graph included in the article shows that South Korea and EU nations are the top recyclers.

We can recycle everything we use, including cigarette butts and toothbrushes. So why don’t we?

Within the broad range of sustainability concepts and activities, recycling is without doubt the most easily understood and accessible: individuals and groups, old and young, communities and institutions can participate.

When we buy a candy bar, we own the wrapper after the short life of the product; doing something with that branded possession, rather than adding to waste, feels good. Recycling is empowering to consumers and, in the case of traditionally recyclable materials such as glass, paper, rigid plastics and certain metals, economically viable. Recycling not only diverts potentially valuable materials from landfills and incinerators, it also offsets demand for virgin materials, helping to keep carbon in the ground. Recycling aligns human consumption with nature’s activities.

But as human-generated waste streams continue to evolve in diversity and volume, the global community faces the mounting challenge of developing viable recycling and waste management solutions at a comparable pace.


There might be nothing new under the sun - but here’s something I didn’t know about the moon. In fact, I think few people knew how unique and erratic the moon’s orbit really is. The graphic explains the situation very clearly.

New model explains the moon's weird orbit

Simulations suggest a dramatic history for the Earth-moon duo
The moon, Earth's closest neighbor, is among the strangest planetary bodies in the solar system. Its orbit lies unusually far away from Earth, with a surprisingly large orbital tilt. Planetary scientists have struggled to piece together a scenario that accounts for these and other related characteristics of the Earth-moon system. A new research paper, based on numerical models of the moon's explosive formation and the evolution of the Earth-moon system, comes closer to tying up all the loose ends than any other previous explanation.


Here is some great news for travellers - and, probably a little later, for libraries and all organizations and homes with ‘inventories’ they want to track.

RFIDs are set almost to eliminate lost luggage

No nasty surprises at the carousel
Having bags go astray on a flight is rare but infuriating. Indeed, according to a study by Skytrax, lost luggage was passengers’ number-one complaint last year, beating even flight delays and cramped seats. That frustration could become rarer still. The aviation industry is increasingly using radio-frequency identification devices (RFIDs) to track bags. Airlines have begun to attach these RFIDs to luggage tags. Doing so could significantly reduce the number of bags that are mishandled.

Research by SITA, an IT firm, and the International Air Transport Association, an industry association, found that a widespread adoption of RFIDs could allow 99% of bags to be tracked successfully. Already, other new technologies have cut the number of mishandled bags in half since 2007. RFIDs could reduce that number by a further 25% over the next six years, even as volume continues to increase.

...when might RFIDs become the norm? On Delta, the process is already well underway. The airline invested $50m in RFIDs this year and has equipped 84 American airports with the technology to add them to its tags; international airports are expected to follow soon. Delta has also launched an app that enables passengers to track their bags on a map, so they can confirm that their luggage is headed to the right place (or discover its whereabouts if it isn’t).


The theory of evolution is itself evolving in very important ways - this is an excellent article exploring the complexity of the gene pool and DNA as we continue to learn about and domesticate evolution itself. It is a longish article, but well worth the read for anyone interested in how evolution works. Even more important is the point that optimal solutions matter less than simply viable ones - in the continual evolution of life, evolution is an eternal ‘beta’ world that transforms all participants into eternal ‘newbies’.
Exactly which genes you have may not matter so much (within reason), because the job they do is more a property of the network in which they are embedded.
The same explosion of combinatorial options happens for proteins, which are molecules made up of many tens to hundreds of amino acids bound together in chains and folded up into particular molecular shapes. There are 20 different amino acids found in natural proteins, and for proteins just 100 amino acids long (which are small ones) the number of permutations is 10^130. Yet the 4 billion years of evolution so far have provided only enough time to create around 10^50 different proteins. So how on earth has it managed to find ones that work?
…. a “dirty secret” behind the success of the so-called modern synthesis of Darwinian evolutionary theory and genetics. How does evolution find workable solutions when it lacks the means to explore even a small fraction of the options? And how does evolution find its way from an existing solution to a viable new one—how does it create? The answer is, at least in part, a simple one: It’s easier than it looks. But only because the landscape that the evolutionary process explores has a remarkable structure, and one that neither Darwin nor his successors who merged Darwinism with genetics had anticipated.
These ideas suggest that evolvability and openness to innovation are features not just of life but of information itself.
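The scale mismatch the excerpt describes is easy to verify with a couple of lines of arithmetic (using only the numbers quoted above):

```python
import math

# 20 amino acids, chains 100 residues long.
possible = 20 ** 100                   # all possible 100-residue proteins
explored = 10 ** 50                    # rough total evolution has had time to try

print(round(math.log10(possible)))             # 130: sequence space ~ 10^130
print(round(math.log10(explored / possible)))  # -80: fraction explored ~ 10^-80
```

Evolution has thus sampled roughly one part in 10^80 of the space of small proteins - which is why the structure of the landscape, not exhaustive search, has to do the explanatory work.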

The Strange Inevitability of Evolution

Good solutions to biology’s problems are astonishingly plentiful.
You don’t have to be a benighted creationist, nor even a believer in divine providence, to argue that Darwin’s astonishing theory doesn’t fully explain why nature is so marvelously, endlessly inventive. “Darwin’s theory surely is the most important intellectual achievement of his time, perhaps of all time,” says evolutionary biologist Andreas Wagner of the University of Zurich. “But the biggest mystery about evolution eluded his theory. And he couldn’t even get close to solving it.”

What Wagner is talking about is how evolution innovates: as he puts it, “how the living world creates.” Natural selection supplies an incredibly powerful way of pruning variation into effective solutions to the challenges of the environment. But it can’t explain where all that variation came from. As the biologist Hugo de Vries wrote in 1905, “natural selection may explain the survival of the fittest, but it cannot explain the arrival of the fittest.” Over the past several years, Wagner and a handful of others have been starting to understand the origins of evolutionary innovation. Thanks to their findings so far, we can now see not only how Darwinian evolution works but why it works: what makes it possible.

A popular misconception is that all it takes for evolution to do something new is a random mutation of a gene—a mistake made as the gene is copied from one generation to the next, say. Most such mutations make things worse—the trait encoded by the gene is less effective for survival—and some are simply fatal. But once in a blue moon (the argument goes) a mutation will enhance the trait, and the greater survival prospects of the lucky recipient will spread that beneficial mutation through the population.