Thursday, January 28, 2016

Friday Thinking 29 January 2016

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 

In the 21st Century curiosity will SKILL the cat.

The plunging fixed costs of digital technology, the near zero marginal cost of utilizing it and the intrinsic interconnected nature of the technology itself is what has enabled a qualitative leap in "velocity, scope, and systems impact" for the past twenty-five years. … Wherever digital technology has spread -- personal computers, cell phones, the World Wide Web, social media, data storage, digital music and video, renewable energy technology, fabrication technology, robotics, artificial intelligence, gene splicing and gene sequencing, synthetic biology, GPS tracking, and now the Internet of Things -- the velocity, scope, and systems impact has been both exponential and transformative. Again, this has been going on for decades.

The music industry, television, the news media, the knowledge sector, and more recently, the energy sector, transport sector, and retail sector have been massively disrupted and diminished by the free sharing of music, YouTube videos, e-books, social media, Wikipedia, and Massive Open Online Courses at near Zero Marginal Cost. Millions of people are also producing renewable energy at near Zero Marginal Cost, car sharing and home sharing at low marginal cost, producing 3D printed products at low marginal cost, and increasingly transferring their shopping to virtual retail. At the same time, while traditional industries have declined, thousands of new entrepreneurial enterprises -- some profit driven, others nonprofit -- have arisen. These new enterprises are harnessing the productivity potential of the digital revolution by creating the digital platforms, algorithms, apps, and interconnections, speeding humanity into the digital era and a Third Industrial Revolution.

Here's a better way of understanding the current era. The digital revolution of the past forty years is maturing with each new network interconnection into a system-wide phenomenon, changing the way we work, live, and govern ourselves. Like the First and Second Industrial Revolutions, the system comes together when three defining technologies emerge and converge to create what we call in engineering, a general purpose technology platform that fundamentally changes the way we manage, power, and move economic activity: new communication technologies to more efficiently manage economic activity; new sources of energy to more efficiently power economic activity; and new modes of transportation to more efficiently move economic activity. Each of these defining technologies interacts with each other to enable the system to operate as a whole... For example, in the 19th century, steam-powered printing and the telegraph, abundant coal, and locomotives on national rail systems gave rise to the First Industrial Revolution. In the 20th Century, centralized electricity, the telephone, radio and television, cheap oil, and internal combustion vehicles on national road systems converged to create an infrastructure for the Second Industrial Revolution.

Today, the system-wide infrastructure is being scaled up and built out for the Third Industrial Revolution. The digitalized communication Internet is converging with a digitalized renewable Energy Internet, and a digitalized, GPS-guided and soon driverless Transportation and Logistics Internet, to create a super-Internet to manage, power, and move economic activity across society's value chains. These three Internets ride atop a platform called the Internet of Things. In the Internet of Things era, sensors will be embedded into every device and appliance, allowing them to communicate with each other and Internet users, providing up to the moment data on the managing, powering, and moving of economic activity in a smart digital society.

Many of today's global corporations will successfully manage the transition by adopting the new distributed and collaborative business models of the Third Industrial Revolution while continuing their traditional Second Industrial Revolution business practices. In the coming years, capitalist enterprises will likely find more value in aggregating and managing laterally scaled networks than in selling discrete products and services in vertically integrated markets.

In the digitalized Third Industrial Revolution, social capital is as vital as market capital, access is as important as ownership, sustainability supersedes consumerism, collaboration is as crucial as competition, virtual integration of value chains gives way to lateral economies of scale, intellectual property makes room for open sourcing and creative commons licensing, GDP becomes less relevant, and social indicators become more valuable in measuring the quality of life of society, and an economy based on scarcity and profit vies with a Zero Marginal Cost Society where an increasing array of goods and services are produced and shared for free in an economy of abundance.
Jeremy Rifkin - The 2016 World Economic Forum Misfires With Its Fourth Industrial Revolution Theme

Lest you think that I fear “big data,” let me take a moment to highlight the potential. I’m on the board of Crisis Text Line, a phenomenal service that allows youth in crisis to communicate with counselors via text message. We’ve handled millions of conversations with youth who are struggling with depression, disordered eating, suicidal ideation, and sexuality confusion. The practice of counseling is not new, but the potential shifts dramatically when you have millions of messages about crises that can help train a system designed to help people. Because of the analytics that we do, counselors are encouraged to take specific paths to suss out how they can best help the texter. Natural language processing allows us to automatically bring up resources that might help a counselor or encourage them to pass the conversation to a different counselor who may be better suited to help a particular texter. In other words, we’re using data to empower counselors to better help youth who desperately need our help. And we’ve done more active rescues during suicide attempts than I like to count (so many youth lack access to basic mental health services).

The techniques we use at Crisis Text Line are the exact same techniques that are used in marketing. Or personalized learning. Or predictive policing. Predictive policing, for example, involves taking prior information about police encounters and using that to make a statistical assessment about the likelihood of crime happening in a particular place or involving a particular person. In a very controversial move, Chicago has used such analytics to make a list of people most likely to be a victim of violence. In an effort to prevent crime, police officers approached those individuals and used this information in an effort to scare them to stay out of trouble. But surveillance by powerful actors doesn’t build trust; it erodes it. Imagine that same information being given to a social worker. Even better, to a community liaison. Sometimes, it’s not the data that’s disturbing, but how it’s used and by whom.

Data is power. Increasingly we’re seeing data being used to assert power over people. It doesn’t have to be this way, but one of the things that I’ve learned is that, unchecked, new tools are almost always empowering to the privileged at the expense of those who are not.

We are moving into a world of prediction. A world where more people are going to be able to make judgments about others based on data. Data analysis that can mark the value of people as worthy workers, parents, borrowers, learners, and citizens. Data analysis that has been underway for decades but is increasingly salient in decision-making across numerous sectors. Data analysis that most people don’t understand.
Danah Boyd - What World Are We Building?
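The kind of data-driven triage boyd describes - surfacing resources or escalating a conversation based on the text of incoming messages - can be sketched in a deliberately toy form. Everything here (topics, keywords, resource names) is invented for illustration; Crisis Text Line's actual systems use trained natural-language models, not keyword lists:

```python
# Toy illustration of text triage: score an incoming message against
# keyword lists and suggest a resource or an escalation path. All topics,
# keywords, and resource names here are invented for illustration only.
TOPIC_KEYWORDS = {
    "disordered_eating": {"eating", "food", "weight", "purge"},
    "self_harm": {"hurt", "cutting", "pills", "die"},
}
RESOURCES = {
    "disordered_eating": "disordered-eating resource sheet",
    "self_harm": "escalate to a counselor trained in active rescue",
}

def triage(message):
    """Return suggested resources for every topic whose keywords appear."""
    words = set(message.lower().split())
    return [RESOURCES[topic]
            for topic, keywords in TOPIC_KEYWORDS.items()
            if words & keywords]

print(triage("i have been cutting again"))
```

The same scoring-and-routing shape, with the keyword sets replaced by statistical models, is exactly what makes the technique equally available to counseling, marketing, and predictive policing - boyd's point about use and user, not data.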

I've noticed though that computer game designers don't look much to the past. All their idealized classics tend to be in reverse, they're projected into the future. When you're a game designer and you're waxing very creative and arty, you tend to measure your work by stuff that doesn't exist yet. Like now we only have floppies, but wait till we get CD-ROM. Like now we can't have compelling lifelike artificial characters in the game, but wait till we get AI. Like now we waste time porting games between platforms, but wait till there's just one standard. Like now we're just starting with huge multiplayer games, but wait till the modem networks are a happening thing. And I -- as a game designer artiste -- it's my solemn duty to carry us that much farther forward toward the beckoning grail....

For a novelist like myself this is a completely alien paradigm. I can see that it's very seductive, but at the same time I can't help but see that the ground is crumbling under your feet. Every time a platform vanishes it's like a little cultural apocalypse. And I can imagine a time when all the current platforms might vanish, and then what the hell becomes of your entire mode of expression? Alan Kay -- he's a heavy guy, Alan Kay -- he says that computers may tend to shrink and vanish into the environment, into the walls and into clothing.... Sounds pretty good.... But this also means that all the joysticks vanish, all the keyboards, all the repetitive strain injuries.

I'm sure you could play some kind of computer game with very intelligent, very small, invisible computers.... You could have some entertaining way to play with them, or more likely they would have some entertaining way to play with you. But then imagine yourself growing up in that world, being born in that world. You could even be a computer game designer in that world, but how would you study the work of your predecessors? How would you physically *access* and *experience* the work of your predecessors? There's a razor-sharp cutting edge in this art-form, but what happened to all the stuff that got sculpted?

I don't think it's any accident that this is happening.... I don't think that as a culture today we're very interested in tradition or continuity. No, we're a lot more interested in being a New Age and a revolutionary epoch, we long to reinvent ourselves every morning before breakfast and never grow old. We have to run really fast to stay in the same place. We've become used to running, if we sit still for a while it makes us feel rather stale and panicky. We'd miss those sixty-hour work weeks.
...don't tie my words and my thoughts to the fate of a piece of hardware, because hardware is even more mortal than I am...
Bruce Sterling. The Wonderful Power of Storytelling
Computer Game Developers Conference, March 1991

Schwab compares Detroit in 1990 with Silicon Valley in 2014. In 1990 the three biggest companies in Detroit had a market capitalisation of $36bn (£25bn), revenues of $250bn and 1.2 million employees. In 2014, the three biggest companies in Silicon Valley had a considerably higher market capitalisation ($1.09tn) and generated roughly the same revenues ($247bn), but with about 10 times fewer employees (137,000).
Fourth Industrial Revolution brings promise and peril for humanity
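Schwab's comparison is simple per-employee arithmetic, and it is worth making explicit. A quick sketch using the figures quoted above (the figures are Schwab's; the per-employee framing is mine):

```python
# Back-of-envelope version of Schwab's Detroit (1990) vs. Silicon Valley
# (2014) comparison. Dollar figures are as quoted above, in billions.
detroit = {"market_cap": 36, "revenue": 250, "employees": 1_200_000}
valley = {"market_cap": 1_090, "revenue": 247, "employees": 137_000}

def per_employee(region):
    """Return (market cap, revenue) per employee, in dollars."""
    return (region["market_cap"] * 1e9 / region["employees"],
            region["revenue"] * 1e9 / region["employees"])

d_cap, d_rev = per_employee(detroit)
v_cap, v_rev = per_employee(valley)
print(f"Detroit 1990: ${d_cap:,.0f} market cap, ${d_rev:,.0f} revenue per employee")
print(f"Valley 2014:  ${v_cap:,.0f} market cap, ${v_rev:,.0f} revenue per employee")
print(f"Employee ratio: {detroit['employees'] / valley['employees']:.1f}x")
```

Run this and the asymmetry jumps out: roughly the same revenue produced by a workforce nearly nine times smaller, with a market valuation two orders of magnitude higher per employee.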

Here’s an important article by Jeremy Rifkin concerning the Davos theme of a Fourth Industrial Revolution - or whether it is the actualization of the Third Industrial Revolution. The important point is not whether it’s the 3rd or 4th but what this stage of technological development means for human political-economies and governance.
The 2016 World Economic Forum Misfires With Its Fourth Industrial Revolution Theme
Global business leaders, heads of state, public intellectuals, and NGOs will be making their annual pilgrimage to the tiny ski resort village of Davos, Switzerland on January 20th through 23rd. The forum is a unique venue crafted by the German economist Klaus Schwab more than 40 years ago. Its primary mission is to engage the world's elite in future forecasting, with the objective of preparing them for "the next big thing."

While the central theme of each year's forum is often spot on and, more often than not, inspiring and thought-provoking, occasionally the forum misfires.

This year, the central theme is the Fourth Industrial Revolution. Professor Schwab introduced the theme in a lengthy essay published in Foreign Affairs in December 2015. He argues that we are on the cusp of a Fourth Industrial Revolution that will fundamentally change the way we work and live in the coming decades. Much of the essay's text eloquently describes the vast technological changes brought on by the digitalization of economic and social life and its disruptive impact on conventional business practices and social norms. I don't disagree. Where I take exception is with Professor Schwab's suggestion that these initiatives represent a Fourth Industrial Revolution.

Schwab says that the First Industrial Revolution introduced steam-powered and mechanized production. The Second Industrial Revolution introduced electric power and mass-production processes. The Third Industrial Revolution introduced the digitalization of technology. He then declares that "now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the end of the last revolution. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres."

But here's the rub. The very nature of digitalization -- which characterizes the Third Industrial Revolution -- is its ability to reduce communications, visual, auditory, physical, and biological systems, to pure information that can then be reorganized into vast interactive networks that operate much like complex ecosystems. In other words, it is the interconnected nature of digitalization technology that allows us to penetrate borders and "blur the lines between the physical, digital, and biological spheres." Digitalization's modus operandi is "interconnectivity and network building." That's what digitalization has been doing, with increasing sophistication, for several decades. This is what defines the very architecture of the Third Industrial Revolution. All of which raises the question: why, then, a Fourth Industrial Revolution?

David Graeber’s “Debt: The First 5,000 Years” is a must-read for anyone interested in the history of currency and what money really is. This interview focuses on his most recent book (also a great read - highly recommended).
David Graeber. I don’t think social movements failed. I have a theory about that: it’s called the “3.5 years historical lag”. After the financial crisis hit, back in 2008, security forces all around the world started gearing up for the inevitable protest movements. Yet, after a year or two, it felt like nothing was going to happen after all. And suddenly, in 2011 – though nothing particular had happened that year — it started. Like in 1848 or in 1968, the social movements are not about seizing power right away: it’s about changing the way we think about politics. And at this level, I think there has been a profound change. Many expected Occupy to take a formal political form. True, it did not happen, but look at where we are 3.5 years later: in most countries where substantial popular movements happened, left parties are now switching to embrace these movements’ sensibilities (Greece, Spain, United States, etc.). Maybe it will take another 3.5 years for them to have an actual impact on policy making, but it seems to me like the natural path of things.

You see, we live in a society of instant gratification: we expect that we are going to click and that something will happen. That’s not the way social movements work. Change does not happen overnight. It took a generation for the abolitionist or the feminist movement to reach their objective, and both managed to remove institutions that had been around for centuries!

Yanis Varoufakis is at the forefront of change in the transformation of economics - the former Greek Minister of Finance and Economist in Residence at Valve (one of the world’s largest video game companies). This recent 20 min TED talk is a must-view for anyone interested in the future of economics and the emerging paradigm change.
Capitalism will eat democracy — unless we speak up
Have you wondered why politicians aren't what they used to be, why governments seem unable to solve real problems? Economist Yanis Varoufakis, the former Minister of Finance for Greece, says that it's because you can be in politics today but not be in power — because real power now belongs to those who control the economy. He believes that the mega-rich and corporations are cannibalizing the political sphere, causing financial crisis. In this talk, hear his dream for a world in which capital and labor no longer struggle against each other, "one that is simultaneously libertarian, Marxist and Keynesian."

Another TED talk (17 min) discussing the difference between ‘black swans’ and ‘dragon kings’ - extreme events in a class of their own, outliers that, unlike black swans, are somewhat predictable.
Didier Sornette: How we can predict the next financial crisis
The 2007-2008 financial crisis, you might think, was an unpredictable one-time crash. But Didier Sornette and his Financial Crisis Observatory have plotted a set of early warning signs for unstable, growing systems, tracking the moment when any bubble is about to pop. (And he's seeing it happen again, right now.)

This is a very interesting article putting together a range of research - although it focuses on the U.S., there are many indicators worth thinking about for any nation or political-economy.
The Strange Disappearance of Cooperation in America
The title of this blog is a paraphrase of a 1995 article by Robert Putnam, “The Strange Disappearance of Civic America.” Robert Putnam is a political scientist at Harvard who over the last 20 years has been documenting the decline of ‘social capital’ in America.

Putnam has argued, in particular, that the last several decades saw lower levels of trust in government, lower levels of civic participation, lower connectedness among ordinary Americans, and lower social cooperation.

This is a puzzling development, because from its inception the American society was characterized, to an unusual degree, by the density of associational ties and an abundance of social capital. Almost 200 years ago that discerning observer of social life, Alexis de Tocqueville, wrote about the exceptional ability of Americans to form voluntary associations and, more generally, to cooperate in solving problems that required concerted collective action. This capacity for cooperation apparently lasted into the post-World War II era, but several indicators suggest that during the last 3-4 decades it has been unraveling.

This is a very brief article discussing the evolution of Wikia - the fan-wiki platform founded by Jimmy Wales, who also co-founded Wikipedia. This is actually quite exciting.
Wikia Launches Fandom, a New Place to Get Your Nerd On
One of the largest entertainment fan sites around is spawning its own sequel: Fandom.
Fandom is an outgrowth of Wikia, a site where the vast majority of the content is created by devotees who provide an encyclopedic accounting of the topic — be it video games like Fallout, television shows like Marvel’s “Daredevil” on Netflix or movies like “Star Wars: The Force Awakens.”

Wikia’s millions of pages of content provide synopses plus deep dives on plot, characters, timelines, trivia and more. Thousands of users spent untold hours building out these comprehensive guides, Wales said.

But Fandom breaks from the Wikia format. It’s a pure media destination, with an editorial staff writing original stories, creating videos and picking content from elsewhere on the Web as well as from Wikia’s 345,000 communities. Fans, who Jimmy Wales, founder and chairman of Wikia and Wikipedia, describes as the ultimate subject-matter experts, will also submit contributed works.
“We have this passionate group of people,” said Wales. “Now they’ve got a voice.”

The new media site will provide original videos, news and features, as well as curated content — all with a focus on the entertainment-obsessed fan. The site is envisioned as a one-stop shop for all things pop culture, encompassing TV shows, movies, video games and more.
“It basically puts fans first,” said Wales, who is expected to announce the launch of Fandom Monday morning at the IAB annual leadership meeting in Palm Desert, Calif.
Here is the Link to FANDOM:

This is a 12 min read from Danah Boyd - a sort of summary of her engagement with technology and the Internet. It is well worth the read for anyone interested in the social aspects of technology’s impact since the rise of the Internet, including her research on how youth use it and the class-race aspects of the shift from ‘MySpace’ to ‘Facebook’.
What World Are We Building?
It’s easy to love or hate technology, to blame it for social ills or to imagine that it will fix what people cannot. But technology is made by people. In a society. And it has a tendency to mirror and magnify the issues that affect everyday life. The good, bad, and ugly.

In the early days of social network sites, it was exhilarating watching people grasp that they were part of a large global network. Many of my utopian-minded friends started dreaming again of how this structure could be used to break down social and cultural barriers. Yet, as these tools became more popular and widespread, what unfolded was not a realization of the idyllic desires of many early developers, but a complexity of practices that resembled the mess of everyday life.

In 2006–2007, I watched a historic practice reproduce itself online. I watched a digital white flight [.pdf]. Like US cities in the 70s, MySpace got painted as a dangerous place filled with unsavory characters, while Facebook was portrayed as clean and respectable. With money, media, and privileged users behind it, Facebook became the dominant player that attracted everyone. And among youth, racial divisions reproduced themselves again, shifting, for example, to Instagram (orderly, safe) and Vine (chaotic, dangerous).

Teenagers weren’t creating the racialized dynamics of social media. They were reproducing what they saw everywhere else and projecting onto their tools. And they weren’t alone. Journalists, parents, politicians, and pundits gave them the racist language they reiterated.

This is a great 20 min video interview with Ray Kurzweil and his ‘greatest sceptic’ - it covers lots of ground in a short time, including the reliability of general predictions of technological development based on exponential curves.
Neil deGrasse Tyson vs. Ray Kurzweil On The Singularity
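The exponential extrapolation that underlies Kurzweil-style predictions is easy to state: if a capability doubles every d years, then after t years it has grown by a factor of 2^(t/d). A minimal sketch (the doubling time is an assumed parameter, and choosing it - and assuming it stays constant - is precisely where such predictions are contested):

```python
import math

def extrapolate(current, doubling_years, years):
    """Project a capability forward assuming a constant doubling time."""
    return current * 2 ** (years / doubling_years)

# With a (Moore's-law-style) two-year doubling time, 20 years means ten
# doublings, i.e. a 1024x gain; a 10x gain takes log2(10) * 2 ~ 6.6 years.
print(extrapolate(1.0, 2.0, 20))
print(round(math.log2(10) * 2, 1))
```

The sceptic's case is not with this arithmetic but with the premise: real technology curves flatten into S-curves, and a small error in the assumed doubling time compounds dramatically over decades.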

The future of our economies and societies will increasingly depend on creative innovation - here’s a 7 min video discussing the recent book “The Geography of Genius”. Smart cities, rich relationships and the designed serendipity of ‘third places’ will be key to a sustainable future.
Hotbeds of genius and innovation depend on these key ingredients
What kind of environment spawns genius? That’s the question Eric Weiner tackles in his latest book, “The Geography of Genius,” in which Weiner journeys around the world and through time, from Plato’s Athens to Leonardo da Vinci’s Florence, to find the secret ingredients behind some of the greatest minds in history, and what it means for America today. Economics correspondent Paul Solman reports.

This is a longish 1 ½ hour video by Monica Anderson about model-free methods for the science of complex problems. It is well worth viewing for any scientist researching social and living systems.
Monica Anderson: Science Beyond Reductionism
Monica Anderson is CEO of Syntience Inc. and originator of a theory for learning called "Artificial Intuition" that may allow us to create computer-based systems that can understand the meaning of language in the form of text. Here she discusses the ongoing paradigm shift - the "Holistic Shift" - which started in the life sciences and is spreading to the remaining disciplines. Model Free Methods (also known as Holistic Methods) are an increasingly common approach to "the remaining hard problems", including problems in the domain of AI - problems that require intelligence. She illustrates this using a Model Free approach to the Netflix Challenge. Her website provides some background information.

The huge platforms of Google and Facebook are both racing to provide Internet access everywhere. There’s lots of controversy over Facebook’s gated-platform version. Google is already delivering gigabit fiber-optic service to some American cities, and now in India it is delivering free public Wi-Fi.
Waiting for your train in Mumbai? How about streaming some HD videos while you wait
Trains are the lifeblood of India, and train stations sit at the heart of most cities across the country. More than 23 million people, equal to the total population of Australia, get on a train in India every day. Inevitably, many of them end up spending a lot of time in train stations.

Starting today, those passing through Mumbai Central station will have access to something that we hope will make their wait a bit more enjoyable and productive — free, high-speed Wi-Fi. So, if you’re one of the 100,000 people who’ll pass through Mumbai Central today, go ahead, stream the video below in HD to learn more. After that, how about sending those last minute work emails, downloading a new game or offlining a few YouTube videos to keep the kids, and yourself, entertained on the journey ahead.

While we’re thrilled to have the Wi-Fi at this station up and running, it’s really just a small first step. As our CEO, Sundar Pichai, said when we first announced this project, this is just the first of 100 train stations we’ll be bringing online by the end of the year. And one of 400 stations, across every part of India, that we aim to reach in the years ahead in partnership with Indian Railways and RailTel.

The Wi-Fi will be entirely free to start, so you can stream and download to your heart’s content. While there will always be some level of free Wi-Fi available, the long-term goal will be making this self-sustainable to allow for expansion to more stations and places, with RailTel and other partners, in the future. Also, to make sure that a few people spending all day in the station downloading lots of big files don’t slow down the network for everyone, users might notice a drop in speed after their first hour on the network. Most people should still be able to do the things they’ll want to do online.

When we think of science research, we treat publication as the measure of a scientist’s contribution and accomplishment. This is an interesting Nature piece asking how else to give due credit to a scientist’s full body of work - credit that should include not only papers and video presentations but also significant pieces of code and instrumentation.
The unsung heroes of scientific software
Creators of computer programs that underpin experiments don’t always get their due — so the website Depsy is trying to track the impact of research code
For researchers who code, academic norms for tracking the value of their work seem grossly unfair. They can spend hours contributing to software that underpins research, but if that work does not result in the authorship of a research paper and accompanying citations, there is little way to measure its impact.

Take Klaus Schliep, a postdoctoral researcher who is studying evolutionary biology at the University of Massachusetts in Boston. His Google Scholar page lists the papers that he has authored — including his top-cited work, an article describing phylogenetics software called phangorn — but it does not take into account contributions that he has made to other people’s software. “Compared to writing papers, coding is treated as a second-class activity in science,” Schliep says.
Enter Depsy, a free website launched in November 2015 that aims to “measure the value of software that powers science”.

Schliep’s profile on that site shows that he has contributed in part to seven software packages, and that he shares 34% of the credit for phangorn. Those packages have together received more than 2,600 downloads, have been cited in 89 open-access research papers and have been heavily recycled for use in other software — putting Schliep in the 99th percentile of all coders on the site by impact. “Depsy does a good job in finding all my software contributions,” says Schliep.
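The fractional-credit idea behind a figure like Schliep's 34% share can be illustrated with a toy calculation. The author names and commit counts below are invented, and Depsy's real metric is richer (it weighs downloads, literature citations, and reuse by other software), so treat this only as a sketch of the proportional-split concept:

```python
# Toy fractional-credit split: divide a package's credit among contributors
# in proportion to their share of commits. The author names and commit
# counts are invented; Depsy's real metric also weighs downloads,
# literature citations, and reuse by other software.
def credit_shares(commits_by_author):
    """Return each author's fractional share of the package's credit."""
    total = sum(commits_by_author.values())
    return {author: count / total
            for author, count in commits_by_author.items()}

shares = credit_shares({"maintainer": 66, "contributor": 34})
print(shares)  # the second author holds 34% of the credit, as in the article
```

Multiplying each author's share by the package's overall impact score then yields the per-person credit that feeds a percentile ranking like Schliep's.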

Depsy’s creators hope that their platform will provide a transparent and meaningful way to track the impact of software built by academics. The technology behind it was developed by Impactstory, a non-profit firm based in Vancouver, Canada, that was founded four years ago to help scientists to track the impact of their online output. That includes not just papers but also blog posts, data sets and software, and measuring impact by diverse metrics such as tweets, views, downloads and code reuse, as well as by conventional citations.

This is a fine example of using government to stimulate the development of research and applications.
DARPA making fully implantable devices able to connect with up to one million neurons for breakthrough computer-brain interfacing
A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

The Neural Engineering System Design (NESD) program seeks innovative research proposals to design, build, demonstrate, and validate in animal and human subjects a neural interface system capable of recording from more than one million neurons, stimulating more than one hundred thousand neurons, and performing continuous, simultaneous full-duplex (read and write) interaction with at least one thousand neurons in regions of the human sensory cortex. In addition to achieving substantial advances in scale of interface (independent channel count), proposed systems must also demonstrate simultaneous high-precision in neural activity detection, transduction, and encoding, with single-neuron spike-train precision for each independent channel.

This is another breakthrough related to the domestication of DNA and new medical approaches.
Eye Disease Gene Defect Repaired with CRISPR
Gene editing techniques repaired a defective gene in stem cells from an individual with retinitis pigmentosa, an inherited eye disease. The team from Columbia University and University of Iowa reported its findings in today’s (27 January) issue of Scientific Reports.

Retinitis pigmentosa is a family of genetic eye disorders that result in damage to the retina, specifically breakdown and failure of photoreceptor cells in the retina, leading to progressive vision loss. The disease takes several forms, but it generally causes failure of the photoreceptor cells that detect light and color and help the eye see in dim light. As a result, people with retinitis pigmentosa may experience symptoms such as night blindness and loss of peripheral vision. The organization Research to Prevent Blindness says some 100,000 people in the U.S. have retinitis pigmentosa.

The researchers, led by ophthalmology professors Stephen Tsang at Columbia and Vinit Mahajan at Iowa, are seeking therapies for retinitis pigmentosa that address the underlying causes rather than easing symptoms. Treating retinitis pigmentosa is difficult, even with gene therapies, because a single defect is responsible for the disorder and the related base pairs are tightly configured in the genome.

Thus, an editing technique such as CRISPR, with its ability to make precise edits in genes, was considered a good candidate to meet this objective. CRISPR — short for clustered, regularly interspaced short palindromic repeats — is a technology based on bacterial defense mechanisms that use RNA to identify and monitor precise locations in DNA. The actual editing of genomes with CRISPR uses an enzyme known as CRISPR-associated protein 9 or Cas9. With this approach to CRISPR, RNA molecules guide Cas9 proteins to specific genes needing repair, making it possible to address root causes of retinitis pigmentosa and other inherited diseases.

I recently read “How Do You Feel? An Interoceptive Moment with Your Neurobiological Self” - a fascinating book, well worth the read for anyone interested in neurological issues and the link between the embodied sensorium, neurology and the ‘sense of self’ grounded in the mechanism-structures of homeostasis. The important point for this next article is the very large difficulty and challenge involved in actually ‘seeing and imaging’ neural structures, which makes the breakthrough described below genuinely significant for cognitive science.
Neuroscientists develop new tools to safely trace brain circuits
Neutered strain of rabies virus maps brain activity in real time; can shed light on how brain cells guide behavior
Scientists at Columbia University's Mortimer B. Zuckerman Mind Brain Behavior Institute have developed a new viral tool that dramatically expands scientists' ability to probe the activity and circuitry of brain cells, or neurons, in the mouse brain. These findings highlight an innovative feat of molecular engineering that allows the creation of a more complete map of the brain's cellular circuits and will help researchers on their way toward unraveling the mysteries of the brain.

This research was reported today in the journal Neuron.
"At its core, every sensation, thought and movement depends on how the brain's billions of neurons communicate through a complex system of circuits," said Thomas M. Jessell, PhD, the paper's co-senior author and a director of the Zuckerman Institute. "In this study, we developed a strain of rabies that greatly improves our ability to map these circuits, which can lend insight into how these circuits direct behavior -- in health and in disease," added Dr. Jessell, who is also the Claire Tow professor of neuroscience and of biochemistry and molecular biophysics at Columbia.

Because rabies only infects neurons, scientists have long worked to create a modified, safer version of the virus that can travel from cell to cell in the brain -- lighting a path as it goes -- providing a kind of visual map of connections. Researchers had struggled to find success, until eight years ago, when a team from the Salk Institute created an innovative new system.

Another milestone passed in the development of AI - first computers beat us at chess, then Jeopardy, and now Go. What makes this even more amazing is that the AI was not designed specifically to play Go - it learned the game on its own. Try to imagine the implications for the next decade - even if we only think of the development of next-generation video games. Combining these developments with Watson, and thinking of enhancing our cognitive capacities ... it is time to start re-imagining everything.
...the game has long interested AI researchers because of its complexity. The rules are relatively simple: the goal is to gain the most territory by placing and capturing black and white stones on a 19 × 19 grid. But the average 150-move game contains more possible board configurations — 10^170 — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.
AlphaGo plays in a human way, says Fan. “If no one told me, maybe I would think the player was a little strange, but a very strong player, a real person.”
This may be a Turing Test type of accomplishment!
Google AI algorithm masters ancient game of Go
Deep-learning software defeats human professional for first time.
A computer has beaten a human professional for the first time at Go — an ancient board game that has long been viewed as one of the greatest challenges for artificial intelligence (AI).

The best human players of chess, draughts and backgammon have all been outplayed by computers. But a hefty handicap was needed for computers to win at Go. Now Google’s London-based AI company, DeepMind, claims that its machine has mastered the game.

DeepMind’s program AlphaGo beat Fan Hui, the European Go champion, five times out of five in tournament conditions, the firm reveals in research published in Nature on 27 January. It also defeated its silicon-based rivals, winning 99.8% of games against the current best programs. The program has yet to play the Go equivalent of a world champion, but a match against South Korean professional Lee Sedol, considered by many to be the world’s strongest player, is scheduled for March. “We’re pretty confident,” says DeepMind co-founder Demis Hassabis.

“This is a really big result, it’s huge,” says RĂ©mi Coulom, a programmer in Lille, France, who designed a commercial Go program called Crazy Stone. He had thought computer mastery of the game was a decade away.

The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game. But AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games.
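The combinatorial claim in the excerpt - more board configurations than atoms in the Universe - can be sanity-checked with a crude upper bound. Each of the 361 intersections is empty, black, or white, so 3^361 bounds the number of raw configurations from above (the true count of legal positions, around 10^170, is smaller because not every configuration is legal); this is illustrative arithmetic only:

```python
# Crude upper bound on Go board configurations: 3 states per intersection.
import math

go_points = 19 * 19                 # 361 intersections on the board
raw_positions = 3 ** go_points      # upper bound; legal positions are fewer
atoms_in_universe = 10 ** 80        # commonly cited order-of-magnitude estimate

print(f"3^361 is roughly 10^{int(math.log10(raw_positions))}")  # ~10^172
print(raw_positions > atoms_in_universe)                        # True
```

Even this loose bound dwarfs any conceivable exhaustive search, which is why AlphaGo's learned evaluation, rather than brute force, was the needed breakthrough.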

Here's the current latest - I can imagine that very soon this won't be just for extreme sports but also for police and military applications - and soon after that it will be available to everyone, extending the selfie to the self-vid.
Fusar Mohawk turns helmets into smart helmets
Fusar, a smart helmet kit, allows users to simply clip the main component, the Mohawk, to their existing helmet. The kit will be available for $549, while early adopters can buy it for just $349.

With only the Mohawk part, you can add an action camera, activity tracker, communication device, navigation unit, music player, black box and emergency alert system to your helmet. It uses a standard action camera mount and works on the side or at the top of your helmet.

And that’s it. Once it’s on your helmet, you control everything through Fusar’s app on your phone. For instance, you can start recording videos from the mobile app.
The Mohawk camera shoots 1080p videos or 12MP photos. But the best part is the HotShot feature, which lets you shoot a quick 15-second video and send it directly to your social media accounts.

...there are other components in the Fusar kit as well. For example, if you don’t want to use your phone while riding, Fusar also provides a button with LED indicators for your handlebar or wrist. There’s also a communication headset so that you can chat with your buddies, walkie-talkie style.

Finally, the Fusar system tracks your activity with an accelerometer, magnetometer, gyroscope and GPS. You’ll not only get all your stats once you’re done with your ride, but it could also prove to be an essential tool thanks to the built-in crash detection system.

Now this is beginning to sound like we’re entering the future - one of my tests of having entered the future is when winter street plows leave all driveway entrances free of snowbanks. This could solve that, and the driveway-shoveling problem too.
Researchers develop electric concrete for quick snow melting
Engineers from the University of Nebraska have developed electrically conductive concrete to combat icy winter road conditions.

To develop the conductive concrete, UNL professor Chris Tuan added a pinch of steel shavings and a dash of carbon particles to a conventional concrete recipe. His newest ingredients constitute only 20% of the otherwise standard concrete mixture, but conduct enough electricity to melt ice and snow in the worst winter storms while remaining safe to the touch.

The team has been testing the technology for years on a 150-foot bridge, with the help of the Nebraska Department of Roads. They found that the new conductive concrete could save money, too. According to UNL, the power required to thermally de-ice the Roca Spur Bridge during a three-day storm typically costs about $250, several times less, Tuan said, than a truckload of de-icing chemicals.
Tuan said the conductive concrete could also prove feasible for high-traffic intersections, exit ramps, driveways and sidewalks.

The acceleration of renewable energy faces one major barrier: energy storage. Many potential solutions are in progress - here is one breakthrough that may prove very significant.
Researchers prove surprising chemistry inside a potential breakthrough battery
Lithium-air batteries hold the promise of storing electricity at up to five times the energy density of today's familiar lithium-ion batteries, but they have inherent shortcomings. Researchers at the University of Illinois at Chicago have helped prove that a new prototype is powered by a surprising chemical reaction that may solve the new battery's biggest drawback.
The findings are reported in the Jan. 11, 2016 issue of Nature ("A lithium–oxygen battery based on lithium superoxide").

Here’s another important milestone in the advance of energy-paradigm geo-politics.
There Are Now More Solar Jobs In America Than Oil Extraction Jobs
Unfortunately, oil still pays better.
Solar is the energy employer of the future -- or at least that's how the numbers look today.

A new report on the state of the solar industry, out Tuesday from the nonprofit Solar Foundation, shows that almost 209,000 people worked in the solar industry as of November 2015. Of those workers, 90 percent work solely on solar-related projects, according to the report.

For comparison, there were only about 185,000 people working in oil and gas extraction in the United States in December 2015, according to the Bureau of Labor Statistics -- although when you add petroleum refining to the mix that total jumps by about 100,000 jobs. The full supersector of the economy, which the BLS calls natural resources and mining, employs about 785,000 people.

The oil industry has had a rough 18 months, as the price of oil slid from more than $100 a barrel in the spring of 2014 to just over $30 a barrel in recent weeks. The low price has caused layoffs in what had been a robust and growing shale oil extraction business.

The solar industry, meanwhile, continues to grow as the technology becomes cheaper, making it a better deal for the average household. The Solar Foundation's report also shows that the price of installed solar panels continues to drop.

The Economics of the Weird
This is a small niche of employment. But thinking about it further, we could include all those people eking out a living (even part-time) in ‘cosplay’ (think of the costumed ‘personalities’ that attend conventions like Comic-Con), in medieval fairs, or in Live Action Role Play (LARP) events. It could be that the ecosystem of ‘costumed employment’ is growing even if the mer-being niche is small.
There are 1,000 men and women in the US working full-time as mermaids and mermen
The US Bureau of Labor Statistics will release the latest unemployment numbers on Friday. The breakdown of who is and isn't working won't include stats about the mermaid economy.

But according to Fast Company magazine, there are about 1,000 people working as full-time mermaids and mermen in the United States. Linden Wolbert is one of them. As a full-time working mermaid, Wolbert performs at parties, weddings, resorts and hotels. Her celebrity clients include the likes of Justin Timberlake and Jessica Alba.

“Being a mermaid professionally has been a very interesting path,” she says. “I started off doing this 10 years ago and created a business for which there was nothing to model after. I truly had to create it, build it, dream it, paint it and sew it. It has been challenging to say the least, but I’ve finally come to a place where I feel very content in what I’m doing — I know there’s nothing else that I could ever do. And it does pay me some nice sand dollars, and I’m very happy about that.”