Friday, December 25, 2015

Friday Thinking 25 December 2015

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 

In the 21st Century curiosity will SKILL the cat.


...The shape of the regime dominated by Moore’s Law and Berners-Lee’s Web is still to be determined, and each one of us can influence the outcome by acting knowledgeably. In other words, literacy is pivotal. The new literacies we now need to navigate to a more humane and mindful future encompass the “media literacies” that educators and parents have discussed for decades, but also include skill sets far more powerful and potentially effective than knowledge of how to decode advertising, television, and news media. The whole idea of participatory culture is that when technologies enable billions to influence cultural production, rather than leaving the influences on what most people think and believe to the elites of the print and broadcast regimes, today’s aggregated actions of web publishers, YouTube video makers, Pinterest curators, dot-com entrepreneurs, connected learners and educators are shaping new ways of learning, socializing, educating, and governing.
Henry Jenkins on Participatory Media in a Networked Era


Today, more people communicate by texting than by calling on the phone. Many are also used to instantly sharing photos and videos via social networks.

By sharing observations, opinions and ideas and actively commenting on what others post online, smartphone owners are increasingly acting like journalists.
Citizens have been involved in such journalistic activity before, but the value on an individual level is increasing as more people participate as netizens online.
Two thirds of smartphone owners say they share more information online now than ever before. For example, over 70 percent say they share personal photos regularly, and have an audience who see what’s been shared.
10 Hot Consumer Trends 2016


Markets allow people to vote with their dollars. Businesses must compete for these dollars/votes by offering a better price or higher quality product. That goes for food, housing, nearly anything. But in order for this voting power to work, people must have at least a minimum amount of money to vote with, and they must have the freedom to choose how to spend it.
Why Milton Friedman Supported a Guaranteed Income (5 Reasons)


By 2020, we could have 20 billion, 34 billion or maybe even 50 billion devices connected by IoT – depending on the analyst firm you ask. But the internet of things is ultimately not about “things”, it’s about service. In 2016, the analysts watching the IoT industry will shift their focus to put a premium on the quantifiable impact of new services (like revenue generated or new experiences enabled) instead of simply tallying device quantities. In fact, IDC added an amendment to its recent device count forecast to say that “by 2018, there will be 22 billion internet of things devices installed, driving the development of more than 200,000 new IoT apps and services”.

In 2015, GM announced that every one of its new vehicles in the United States would ship with 4G-LTE embedded – and the automaker already has 1 million connected cars on the road. In 2016, we’ll see this trend expand greatly and, for the first time, the majority of new vehicles produced in the United States will be connected. We won’t hit the same tipping point globally in 2016, but initial strong growth indicates that connected cars will be the norm for all new vehicles produced worldwide within the next couple of years.
5 predictions for the Internet of Things in 2016


We live in a world of algorithmic predominance: social media platforms that harvest our data; recommendation algorithms that offer up new products to buy based on past purchases; search algorithms that show us tailored results based on our profiles and location; and predictive algorithms that influence our chances of getting a loan or shape how much we may pay for health insurance. Algorithms are increasingly becoming conduits through which we interact with others and with the Internet of Things; they shape how we define ourselves online and make countless known and unknown decisions for and about us.

What lurks within these ‘black boxes’? How can they be understood, governed and made accountable? These are challenging but significant political questions, which require urgent public debate. With the goal of fostering long-range thinking about this issue, we imagined a set of potential future scenarios for algorithmic accountability and governance.
Future Scenarios for algorithmic accountability and governance


… there is nothing at all about science and its proper pursuit that requires a high success rate or the likelihood of success, or the promise of any result. Indeed, in my view these things are an impediment to the best science, although I admit that they will get you along day to day. It seems to me we have simply switched the priorities. We have made the easy stuff—running experiments to fill in bits of the puzzle—the standard for judgment and relegated the creative, new ideas to that stuck drawer.

How will this change? It will happen when we cease, or at least reduce, our devotion to facts and collections of them, when we decide that science education is not a memorization marathon, when we—scientists and nonscientists—recognize that science is not a body of infallible work, of immutable laws and facts. When we once again recognize that science is a dynamic and difficult process and that most of what there is to know is still unknown. When we take the advice of Samuel Beckett and try to fail better.

How long will this take to change? I think it will require the kind of revolutionary change in our thinking about science comparable to what Thomas Kuhn famously, if perhaps a bit inaccurately, identified as a paradigm shift, a revolutionary change in perspective. However, it is my opinion that revolutionary changes often happen faster than “organic” changes. They may seem improbable or even impossible, but then once the first shot is fired, change occurs rapidly. I think of various human rights movements—from civil rights for black Americans to universal suffrage for women. Unthinkable at first, and then unfathomable as to what took so long. The sudden fall of the supposedly implacable Soviet Union is another example of how things can happen quickly when they involve a deep change in the way we think. I’m not sure what the trigger will be in the case of science, but I suspect it will have to do with education—an area that is ripe for a paradigmatic shift on many levels, and where the teaching of science and math is virtually a poster child for wrongheaded policies and practices.

Max Planck, when asked how often science changes and adopts new ideas, said, “with every funeral.” And for better or worse, they happen pretty regularly.
Why Scientists Need To Fail Better


Act now to avoid paralysis. As these cognitive technology activities demonstrate, artificial intelligence is no longer a pie-in-the-sky endeavor or an R&D initiative isolated from revenue growth or the customer relationship. Arguably, the ability to tactically execute was not broadly available across all the technology subsectors even a year ago. A product innovation approach built on traditional competitive advantage or a “me too” strategy would be an attempt to use old tools to deploy new technologies in a fundamentally transformed product distribution channel, and would thus be unsuccessful. In this transformed environment, avoid paralysis and denial. The art of the possible is now. Begin by determining which markets and business priorities are immediately addressable, then apply them to formative cognitive-technology product-innovation value maps, technology roadmaps, and go-to-market strategies, while seeking out new product innovation toolsets.

Learn to manage the new, new thing: Risk. Change at this speed and scale creates opportunities—and risk—which will challenge strongly held beliefs at the core of a company’s business model. As a result, new value creation opportunities may exist at the edges, not at the core, of your organizational capabilities. Meanwhile, exponential change empowers new entrants while fundamentally challenging the cost structures and delivery models of incumbent companies. It helps technology companies to ask themselves: How is your organization formulating risk vis-a-vis cognitive technologies? And what are the new frameworks and methodologies for quantifying and mitigating risk when applying cognitive technologies to traditional business problems...
Cognitive technologies in the technology sector
From science fiction vision to real-world value


This could be a very big deal - the solving of the graph isomorphism problem. I don’t understand the math - but the implications are very substantial for computer science, analytics and mathematics. This may have a significant impact on faster, more reliable genetic analysis and especially on deep learning and machine learning around pattern recognition.
Landmark Algorithm Breaks 30-Year Impasse
Computer scientists are abuzz over a fast new algorithm for solving one of the central problems in the field.
A theoretical computer scientist has presented an algorithm that is being hailed as a breakthrough in mapping the obscure terrain of complexity theory, which explores how hard computational problems are to solve. Last month, László Babai, of the University of Chicago, announced that he had come up with a new algorithm for the “graph isomorphism” problem, one of the most tantalizing mysteries in computer science. The new algorithm appears to be vastly more efficient than the previous best algorithm, which had held the record for more than 30 years. His paper became available today on the scientific preprint site arxiv.org, and he has also submitted it to the Association for Computing Machinery’s 48th Symposium on Theory of Computing.

For decades, the graph isomorphism problem has held a special status within complexity theory. While thousands of other computational problems have meekly succumbed to categorization as either hard or easy, graph isomorphism has defied classification. It seems easier than the hard problems, but harder than the easy problems, occupying a sort of no man’s land between these two domains. It is one of the two most famous problems in this strange gray area, said Scott Aaronson, a complexity theorist at the Massachusetts Institute of Technology. Now, he said, “it looks as if one of the two may have fallen.”

Babai’s announcement has electrified the theoretical computer science community. If his work proves correct, it will be “one of the big results of the decade, if not the last several decades,” said Joshua Grochow, a computer scientist at the Santa Fe Institute.
The paper can be found here: http://arxiv.org/abs/1512.03547v1
And a good discussion on the background of graph isomorphism
László Babai's Home Page with three video presentations
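
To make the problem concrete: two graphs are isomorphic if one can be turned into the other just by relabelling its vertices. Here is a minimal sketch in Python using the networkx library - note that its built-in check uses the older VF2 backtracking heuristic, which can blow up on hard cases; it is not Babai's new algorithm:

# A minimal sketch of the graph isomorphism problem using networkx.
# networkx's check uses the VF2 backtracking heuristic (worst case
# exponential) - it is NOT Babai's new quasipolynomial algorithm.
import networkx as nx

# Two triangles with different vertex labels: same structure, different names.
G1 = nx.Graph([("a", "b"), ("b", "c"), ("c", "a")])
G2 = nx.Graph([(1, 2), (2, 3), (3, 1)])
print(nx.is_isomorphic(G1, G2))  # True: a relabelling maps one onto the other

# Same node and edge counts, but different shape: a path versus a star.
G3 = nx.path_graph(4)  # 4 nodes, 3 edges in a line
G4 = nx.star_graph(3)  # 4 nodes, 3 edges around a hub
print(nx.is_isomorphic(G3, G4))  # False: the degree sequences differ

The hard part, and the subject of Babai's result, is how the worst-case running time of such a test grows as the graphs get large.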


This is a very interesting 30-minute video by Yaneer Bar-Yam explaining complexity and how the complexity sciences are, well … becoming complex. It is worth the view.
Complex Systems Science: Where Does It Come From and Where is It Going To?
Yaneer Bar-Yam delivered the opening plenary address at the Conference on Complex Systems 2015, at Arizona State University in Tempe, Arizona.


This is an excellent MUST VIEW interview with three of today’s leading researchers of learning and participatory culture.
Henry Jenkins on Participatory Media in a Networked Era, Part 1
You are probably reading this because you are interested in the use of digital media in learning. My single strongest recommendation to you: if you want the best and latest evidence-based, authoritative, nuanced, critical knowledge about how digital media and networks are transforming not just learning but commercial media, citizen participation in democracy, and the everyday practices of young people, my advice is to obtain a copy of the new book, “Participatory Culture in a Networked Era,” by Henry Jenkins, Mizuko Ito, and danah boyd.

This book is the opposite of so much sound-bite generalization about “digital natives” and “Twitter revolutions.” Jenkins, Ito, and boyd seek to unpack the nuances behind the generalizations of digital media enthusiasts and critics alike, rather than to reduce them to easily digested phrases. And, they articulate their knowledge clearly. They not only know this subject matter as well as anyone on the planet, they know how to talk about it.

Unlike so much of the armchair theorizing and anecdotal hypothesizing about young people and their use of media, the thinking underlying this book was informed by the findings of a multi-year, multi-million dollar research project by dozens of University of California researchers who studied what youth from different regions and socio-economic strata actually do with Facebook and Snapchat, YouTube and Tumblr and what their parents and teachers think about youth media practices. Since the publication of that landmark research, the authors have extended their examination of these practices under the auspices of Microsoft Research, MIT, University of California, and USC’s Annenberg School.


For anyone interested in knowledge management and its link with democratic societies - this is a must-hear - a one-hour podcast interview with Harry Collins.
Knowledge and Democracy
Is there a direct connection between knowledge and democracy? What kind of knowledge is required to sustain a healthy democratic society? How can we guarantee a solid foundation for sound policies and social practices? Does democracy help or hinder scientific progress? Can science contribute to the evolution and maintenance of a healthy civil society? A recent Royal Society of Canada symposium, at Memorial University of Newfoundland, considered such questions, with a keynote address from Harry Collins.

Harry Collins is a British sociologist of science at the School of Social Sciences, Cardiff University, Wales. In 2012 he was elected a Fellow of the British Academy. His best-known book is The Golem: What You Should Know About Science; I would also most highly recommend “Tacit and Explicit Knowledge”.


Democracy, knowledge, and fair distribution are vital to a thriving society. This is an easy fast read - and is another signal of the real potential of an emerging new economic paradigm - one appropriate to near-zero marginal costs and the development of a collaborative commons.
Why Milton Friedman Supported a Guaranteed Income (5 Reasons)
Milton Friedman proposed to give everybody free money in his book “Capitalism and Freedom”. He mostly referred to this plan as a negative income tax (and he also referred to it as a guaranteed income because that’s exactly what it is.) But why would Milton Friedman, an outspoken free market capitalist, support giving people money for nothing? Here are 5 reasons backed by Friedman’s own words.
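
As a worked illustration of the arithmetic: under a negative income tax, anyone earning below a break-even point receives a subsidy equal to a fixed rate times the shortfall. The 50 percent rate and $30,000 break-even below are hypothetical illustration values, not figures Friedman proposed.

# A minimal sketch of negative income tax arithmetic. The 50% rate and
# $30,000 break-even point are hypothetical, not Friedman's own figures.
RATE = 0.5
BREAK_EVEN = 30_000

def total_income(earned: float) -> float:
    """Below the break-even point the 'tax' is negative: a cash subsidy."""
    if earned < BREAK_EVEN:
        return earned + RATE * (BREAK_EVEN - earned)
    return earned  # above break-even, ordinary positive tax applies (omitted)

for earned in (0, 10_000, 20_000, 30_000):
    print(f"earned ${earned:>6,} -> total ${total_income(earned):>8,.0f}")
# earned $     0 -> total $  15,000   (the guaranteed income floor)
# earned $10,000 -> total $  20,000   (every extra dollar earned still pays)

Note the design property this arithmetic guarantees: unlike a welfare cliff, working always increases total income, so the subsidy preserves the incentive to earn.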


This is a very interesting 45-minute video by Keith Devlin (Stanford mathematician) about math and games. “What is a Game?” - a game is a closed system, pursued for pleasure, in which players engage in an artificial learning, achievement, or conflict activity; defined and constrained by rules; in pursuit of an achievable, quantifiable goal. This, Devlin argues, is what mathematics is and what he does as a mathematician. It is well worth the view - especially for anyone concerned with teaching math to children.
Learning Math Through Play
From the "Interactive Media & Games Seminar Series": Keith Devlin, a Stanford mathematician and Co-founder/Executive Director of the H-STAR Institute, shares insights into creating mathematics-based learning video games, gleaned from his work with the educational technology company BrainQuake.


And not only can we learn better through well designed games - but other cognitive benefits also arise.
Playing 3-D video games can boost memory formation, UCI study finds
Results suggest novel approaches to maintaining cognition as we age
Don’t put that controller down just yet. Playing three-dimensional video games – besides being lots of fun – can boost the formation of memories, according to University of California, Irvine neurobiologists.

Along with adding to the trove of research that shows these games can improve eye-hand coordination and reaction time, this finding shows the potential for novel virtual approaches to helping people who lose memory as they age or suffer from dementia. Study results appear Dec. 9 in The Journal of Neuroscience.

Unlike typical brain-training programs, professor of neurobiology & behavior Craig Stark pointed out, video games are not created with specific cognitive processes in mind but rather are designed to immerse users in the characters and adventure. They draw on many cognitive processes, including visual, spatial, emotional, motivational, attentional, critical thinking, problem-solving and working memory.

“It’s quite possible that by explicitly avoiding a narrow focus on a single … cognitive domain and by more closely paralleling natural experience, immersive video games may be better suited to provide enriching experiences that translate into functional gains,” Stark said.
A two-minute video is here: https://www.youtube.com/watch?v=t1YfgMVhhdA


Playing video games is not just a solitary endeavor; learning via games can be collective - going beyond what a single person could learn alone. This very short article is interesting in suggesting the potential of future learning via computer-mediated games.
3D video game innovation brings blind and sighted players together
A GROUND-BREAKING video game, designed so blind and sighted players can take part independently together, has just been launched by British start-up Audazzle.com.
Family-friendly JumpInSauceRS, the brainchild of software entrepreneur Selwyn Lloyd, is a space shooter game in the Space Invaders style and can be played on all devices.

Lloyd developed it after his daughter Daisy, 15, lost her sight through eye cancer 11 years ago. She became increasingly left out of mainstream fun, and when he could not find an inclusive game, he invented one.

JumpInSauceRS uses 3D-audio technology, which mimics how people hear sounds of the everyday world, in a new way to emphasise objects, so, for example, visually-impaired players know when to shoot.

“They use their ears to see where things are,” explains Lloyd, 47.
“When our audio is then played through headphones, it creates the illusion of sound coming from all round the listener. Despite advances in digital sound technology, the use of 3D audio in video games has yet to be fully explored. We break the mould by incorporating both 3D audio and computer graphics, so everyone can join in.”
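
The article doesn't describe Audazzle's implementation, but the core idea of locating a target by ear can be sketched with simple stereo amplitude panning - a much cruder cousin of the HRTF-based 3D audio the quote describes, in which only the left/right loudness varies with the target's angle:

# A toy sketch of 'seeing with your ears' via constant-power stereo panning.
# Real 3D audio (as in the article) uses HRTF filtering; this illustrative
# version only varies left/right loudness with the target's azimuth.
import math

def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """-90 = hard left, 0 = dead centre, +90 = hard right."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

for az in (-90, -45, 0, 45, 90):
    left, right = pan_gains(az)
    print(f"target at {az:+4d} deg -> left {left:.2f}, right {right:.2f}")
# A blind player hears the invader drift between the channels and can
# fire when the two gains sound equal - i.e. the target is dead centre.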


This is a very thoughtful piece by danah boyd - well worth the read for any parent concerned about age and children’s access to the Internet.
What If Social Media Becomes 16-Plus?
New battles concerning age of consent emerge in Europe.
At what age should children be allowed to access the internet without parental oversight? This is a hairy question that raises all sorts of issues about rights, freedoms, morality, skills, and cognitive capability. Cultural values also come into play full force on this one.

Consider, for example, that in the 1800s, the age of sexual (and marital) consent in the United States was between 10 and 12 (except Delaware, where it was seven). The age of consent in England was 12, and it’s still 14 in Germany. This is discomforting for many Western parents who can’t even fathom their 10- or 12-year-old being sexually mature. And so, over time, many countries have raised the age of sexual consent.

How can youth be protected from risks they cannot fully understand, such as the reputational risks associated with things going terribly awry? And what role should the state and parents have in protecting youth?

I don’t know what will come from this law, but it seems completely misguided. It won’t protect kids’ data. It won’t empower parents. It won’t enhance privacy. It won’t make people more knowledgeable about data abuses. It will irritate but not fundamentally harm U.S. companies. It will help vendors that offer age verification become rich. It will hinder EU companies’ ability to compete. But above all else, it will make teenagers’ lives more difficult, make vulnerable youth more vulnerable, and invite kids to be more deceptive. Is that really what we want?


For our children, the importance of Internet access is now even more salient than access to a library.
The UK Council for Child Internet Safety (UKCCIS) just launched both a new industry report and a guide for parents on child safety online. Sonia Livingstone draws on her recent report ‘One in three: internet governance and children’s rights’ and discusses how internet governance needs to consider the specific rights and needs of children, both in terms of protection from harm as well as the right to access and use digital media. Sonia is Professor of Social Psychology at LSE’s Department of Media and Communications and has more than 25 years of experience in media research with a particular focus on children and young people. She is the lead investigator of the Parenting for a Digital Future research project.

According to the UN Convention on the Rights of the Child, children below the age of 18 possess the full range of human rights enjoyed by adults. As legal minors undergoing crucial processes of human development, they have additional rights too – to play, to parenting, to develop to their full potential, and so forth.


This is a must-read summary of the emerging Fourth Industrial Revolution by Klaus Schwab, Founder and Executive Chairman of the World Economic Forum.
The Fourth Industrial Revolution: what it means and how to respond
We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before. We do not yet know just how it will unfold, but one thing is clear: the response to it must be integrated and comprehensive, involving all stakeholders of the global polity, from the public and private sectors to academia and civil society.

The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.


The future of the home and community - and our ability to live relatively autonomously within them - looks promising for us baby boomers, thanks to the advent of the Internet of Things (IoT).
SENSORS LET ELDERLY ‘AGE IN PLACE’ FOR TWICE AS LONG
An independent living community with sensor technology and onsite care coordination successfully helps older adults “age in place,” report researchers.
A new study finds that residents at TigerPlace stayed longer than seniors in other senior housing across the nation. Additionally, residents who lived with sensors in their apartments stayed at TigerPlace the longest.

Length of stay is important because it indicates that residents’ health remains stable enough for them to continue living independently rather than transferring to an advanced-care facility or a hospital.

This technologically enhanced care coordination could serve as a cost-effective care model for improving the health and function of older adults whether they live in senior housing, assisted living, retirement communities, or their own homes.

“I knew we were increasing residents’ lengths of stay based on care coordination because of the positive outcomes we observed in several prior studies, and I thought the sensors also would have an impact,” says Marilyn Rantz, professor emerita in the University of Missouri Sinclair School of Nursing. “But to double length of stay based on care coordination and then to nearly double again based on adding sensors, to me, is huge. That is huge for consumers.”

Comparing the cost of living at TigerPlace with the sensor technology versus living in a nursing home reveals potential savings of about $30,000 per person. Potential cost savings to Medicaid-funded nursing homes, assuming the technology and care coordination are reimbursed, are estimated to be about $87,000 per person.


And sensors are advancing about as fast as Moore’s Law.
CHEAP POLARIZER DETECTS HIV AND EBOLA IN MINUTES
A new device uses polarized light to quickly diagnose a host of diseases, including malaria, HIV, and Ebola.

The technology is based on birefringence, the ability of substances to change the polarization state of light.

Here’s how it works: Place a drop of blood on a special carrier substance. After a few minutes, place the slide on a device that emits polarized light—thanks to an inexpensive polarization filter. The device is covered with a lid containing a second polarization filter, which blocks the light from all materials except those that are crystalline, or materials with directional properties.

“Our test system can be extended to a large number of different viruses or bacteria. It is totally flexible.”
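
The crossed-filter trick the article describes can be sketched with textbook Jones calculus - the matrices and the idealized half-wave-plate sample below are the standard classroom model, not the device's actual optics:

# A Jones-calculus sketch of why crossed polarizers reveal birefringence.
# Idealized textbook model, not the device's actual optics: a crystalline
# (birefringent) sample acts like a waveplate that rotates polarization,
# letting light leak through an otherwise completely dark filter pair.
import numpy as np

def polarizer(theta: float) -> np.ndarray:
    """Jones matrix of a linear polarizer at angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

vertical_analyzer = polarizer(np.pi / 2)   # second filter, crossed at 90 deg
half_wave_45 = np.array([[0.0, 1.0],       # idealized birefringent sample
                         [1.0, 0.0]])      # (half-wave plate, fast axis 45 deg)
light_in = np.array([1.0, 0.0])            # light after the first filter

def intensity(jones: np.ndarray) -> float:
    return float(np.sum(np.abs(jones) ** 2))

print(intensity(vertical_analyzer @ light_in))                   # 0.0 - field stays dark
print(intensity(vertical_analyzer @ (half_wave_45 @ light_in)))  # 1.0 - crystals light up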


The domestication of DNA and CRISPR - here’s a very short story with a link to the original paper.
Gene editing saves baby girl
Gene editing has been used for the first time to successfully treat a human patient – a baby girl with leukaemia – after conventional treatments failed.

Re-writing genes is no longer science fiction thanks to gene-editing technologies called CRISPR and TALENs. Scientists can now alter any gene in any animal and have been using these techniques to cure genetic diseases like cystic fibrosis and muscular dystrophy in animal models. This year, Chinese scientists even edited genes in human embryos in a very controversial study.

Doctors in London were given special permission to trial genetically engineered T cells in Layla Richards because conventional treatments could not cure her leukaemia. Layla didn’t have enough healthy T cells of her own, so the scientists used ‘off-the-shelf’ T-cells called UCART19 cells.

Several months after Layla was treated with 1ml of UCART19 cells she is still in remission.


This is an interesting, short article that provides hints about the looming disruption and transformation of the automobile industry (auto as in autonomous?). The next 10-15 years may well see not only a new paradigm of transportation but completely new players displacing the chief incumbents of the last half century.
Why This Chinese Electric Vehicle Startup Hired a Leader with No Auto Experience
A little-known Chinese company called NextEV plans to take on Tesla and other electric vehicle startups by bolstering its technology leadership.
The news that Padmasree Warrior, a former technology chief at Cisco and Motorola, was hired this week as CEO of the U.S. division of the Chinese company NextEV raised an obvious question: what exactly is NextEV?

The simple answer is that it’s an early stage electric car company based in Shanghai, China, with locations in San Jose, Beijing, Hong Kong, London, and Munich. What it plans to become remains something of a mystery.

Warrior is an interesting addition to the NextEV leadership team. A respected technology executive who worked at Motorola for 23 years and Cisco for the past seven, she has no prior experience in the auto industry. She will have to quickly add automobile design, manufacturing, battery technology, retailing, and servicing to her resume. “What I bring is access to advanced technology,” says Warrior, who left Cisco in June. “And how to apply that to the automotive experience.”

The list of new electric car companies, many of them backed by Chinese tech entrepreneurs, now includes Faraday Future, Atieva, NextEV, and Karma (formerly Fisker). Several of them, like NextEV, are planning to launch in the highly competitive U.S. market—and are attempting to distinguish themselves not just by offering a battery-powered electric vehicle, but by using technology like wireless connectivity, car sharing, and autonomous driving as a hook.


This is a brilliant innovation that we should all be considering in the next five years - the creation of local energy commons.
Community Solar Brings Renewable Energy 'To The Masses'
For the three-quarters of U.S. households that can't install their own rooftop solar, a powerful alternative is emerging.
Solar panels aren’t just for Arizonans living in sprawling ranch houses anymore.

Homeowners who lack adequate roof space or who enjoy the shade of big trees -- even condo owners and renters such as Joe and Vanessa Goldberg of notoriously rainy Seattle -- are now teaming up with their neighbors to buy electricity from shared solar power projects.

"Because we rent, we don't really have the option of putting solar on our house," said environmentally conscious Joe, 35, who once made a local move using only bike trailers.

Like a growing number of Americans, the Goldbergs decided to invest in a community solar project. Solar-paneled picnic shelters in their neighborhood's Jefferson Park feed the local electricity grid. The couple purchased two of the solar units, and now receive credits on their electric bills for their portion of the solar power produced.


The accelerating advance of AI will need more talent to keep up the pace. This is good news and bad news for Canada.
Google Raids Threaten Canada's Lead in Artificial Intelligence
With a tech industry one-third the size of California’s, Canada has confounded expectations by becoming a leader in the booming market for artificial intelligence. Pioneering technologies developed in Canadian labs can be found in Facebook’s facial recognition algorithms, Google’s Photos app, smartphone voice recognition and even Japanese robots.

Now Canada risks losing its AI edge to Silicon Valley.
Several leading Canadian researchers and professors have defected to U.S. tech companies such as Google. Top U.S. universities are scooping up AI experts, too. They’re in hot demand because the emerging technology will underpin the next wave of innovation -- from self-driving cars and personal assistants to smarter prosthetic limbs and industrial robots.

“We are losing our top talent, the talent at every level,” says Ajay Agrawal, a professor at the University of Toronto’s Rotman School of Management. “While we had that advantage, it is slipping through our fingers.”


New sensor technology can expand our biological senses. The images are worth the view.
New camera makes methane visible
Researchers in Sweden have developed a new camera that can visualise the flow of methane – a key greenhouse gas – as it emanates from its source. In one piece of footage the team has shown a plume of methane spilling from a window in a barn housing cows. The camera, which uses a technique called optical infrared hyperspectral imaging, can also quantify the amount of gas being produced and could be useful in producing accurate audits of the sources of methane.

Despite methane’s environmental importance, our knowledge about how it is cycled remains incomplete. Satellite observations can detect methane at the global and regional scales, and individual measurements can provide data at point sources, but information about methane fluxes at the scale of metres has so far been missing.

Now, Magnus Gålfalk of Linköping University and colleagues have plugged this gap. The new camera relies on the unique infrared (IR) absorption or emission profiles of different gases. Molecules will absorb or emit IR depending on whether the background is hotter or colder than the gas. The pattern of absorption or emission lines can be used as fingerprints of chemical compounds. The camera records the data at a high frequency, making many thousands of measurements over a period of seconds, which allows it to track the movement of the gas over time.

‘In this way we do not just detect the source, but we also see the flux,’ says Gålfalk. ‘When we looked at a cow barn, initially we were focused on a pile of manure outside, but then in the corner of the image we noticed a large amount of methane coming from a ventilation outlet – which gave us a bit of a surprise.’
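
A rough sketch of the fingerprint idea: each gas absorbs or emits at a characteristic pattern of wavelengths, and correlating an observed spectrum against a reference fingerprint tells you whether that gas is present. The line positions below are invented for illustration - real methane retrieval fits calibrated radiances around 7.7 micrometres.

# A minimal sketch of spectral fingerprint matching, the idea behind the
# hyperspectral methane camera. Line positions here are invented for
# illustration; a real instrument fits calibrated infrared radiances.
import numpy as np

wavelengths = np.linspace(7.0, 8.5, 300)  # micrometres (illustrative band)

def fingerprint(centres, width=0.02):
    """Sum of Gaussian absorption lines at the given centres."""
    return sum(np.exp(-((wavelengths - c) / width) ** 2) for c in centres)

methane_ref = fingerprint([7.35, 7.60, 7.88])  # hypothetical methane lines
other_gas = fingerprint([7.10, 7.50, 8.20])    # hypothetical interferent

rng = np.random.default_rng(0)
observed = 0.8 * methane_ref + 0.1 * rng.normal(size=wavelengths.size)

def match(obs, ref):
    """Correlation of the observed spectrum with a reference fingerprint."""
    return float(np.corrcoef(obs, ref)[0, 1])

print(f"methane score:   {match(observed, methane_ref):.2f}")  # high (~0.9)
print(f"other-gas score: {match(observed, other_gas):.2f}")    # near zero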


Another step forward for AI is in making sense of our stories - and perhaps, by extrapolation, we can imagine our e-ssistants understanding the stories of our own lives, tracing the mythic patterns entangled within our personal journeys, or the complex stories behind our mental health.
Now AI Machines Are Learning to Understand Stories
Face and speech recognition is now child’s play for the most advanced AI machines. But understanding stories is much harder. That looks set to change.
Artificial-intelligence techniques are taking the world by storm. Last year, Google’s DeepMind research team unveiled a machine that had taught itself to play arcade video games. Earlier this year, a team of Chinese researchers demonstrated a face-recognition system that outperforms humans, and last week, the Chinese internet giant Baidu revealed a single speech-recognition system capable of transcribing both English and Mandarin Chinese.

Two factors have made this possible. The first is a better understanding of many-layered neural networks and how to fine-tune them for specific tasks. The second is the creation of the vast databases necessary to train these networks.
These databases are hugely important. For face recognition, for example, a neural network needs to see many thousands of real-world images in which faces from all angles, sometimes occluded, are clearly labeled. That requires many hours of human annotation but this is now possible thanks to crowdsourcing techniques and Web services such as Amazon’s Mechanical Turk.

The rapid progress in this area means that much of the low-hanging fruit is being quickly cleaned up—face recognition, object recognition, speech recognition, and so on. However, it is much harder to create databases for more complex reasoning tasks, such as understanding stories.


This is an interesting article discussing potential alternatives to ‘deep learning’-based AI systems. It also provides a short, accessible history of the development of AI.
Can This Man Make AI More Human?
One cognitive scientist thinks the leading approach to machine learning can be improved by ideas gleaned from studying children.
Gary Marcus has a very different perspective from many of the computer scientists and mathematicians now at the forefront of artificial intelligence. He has spent decades studying the way the human mind works and how children learn new skills such as language and musicality. This has led him to believe that if researchers want to create truly sophisticated artificial intelligence—something that readily learns about the world—they must take cues from the way toddlers pick up new concepts and generalize. And that’s one of the big inspirations for his new company, which he’s running while on a year’s leave from NYU. With its radical approach to machine learning, Geometric Intelligence aims to create algorithms for use in an AI that can learn in new and better ways.


And another advance in AI. This one, applied dynamically, could focus our attention on what is important and perhaps let us forget what is not.
Deep-learning algorithm predicts photos’ memorability at “near-human” levels
Future versions of an algorithm from the Computer Science and Artificial Intelligence Lab could help with teaching, marketing, and memory improvement.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created an algorithm that can predict how memorable or forgettable an image is almost as accurately as humans — and they plan to turn it into an app that subtly tweaks photos to make them more memorable.

For each photo, the “MemNet” algorithm — which you can try out online by uploading your own photos — also creates a heat map that identifies exactly which parts of the image are most memorable.

“Understanding memorability can help us make systems to capture the most important information, or, conversely, to store information that humans will most likely forget,” says CSAIL graduate student Aditya Khosla, who was lead author on a related paper. “It’s like having an instant focus group that tells you how likely it is that someone will remember a visual message.”
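
The article doesn't say how MemNet builds its heat maps, but a generic way any image-scoring model can be probed for "which parts matter" is occlusion: grey out one patch at a time and watch how much the score drops. The score() function below is a hypothetical stand-in, not MemNet itself.

# A generic occlusion heat map sketch. score() is a hypothetical stand-in
# for an image-scoring model such as a memorability predictor; MemNet
# itself derives its maps internally from CNN features.
import numpy as np

def score(image: np.ndarray) -> float:
    """Hypothetical placeholder; a real model would be a deep CNN."""
    return float(image.mean())

def occlusion_heatmap(image: np.ndarray, patch: int = 8) -> np.ndarray:
    base = score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # grey out one patch
            # A big score drop means this patch mattered to the prediction.
            heat[i // patch, j // patch] = base - score(masked)
    return heat

img = np.random.default_rng(0).random((64, 64))  # stand-in grayscale photo
print(occlusion_heatmap(img).shape)  # an 8x8 grid of importance values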


Another article about improvements in the deep-learning approach to understanding natural speech.
Baidu’s Deep-Learning System is better at English and Mandarin Speech Recognition than most people
The new system, called Deep Speech 2, is especially significant in how it relies entirely on machine learning for translation. Whereas older voice-recognition systems include many handcrafted components to aid audio processing and transcription, the Baidu system learned to recognize words from scratch, simply by listening to thousands of hours of transcribed audio.

The technology relies on a powerful technique known as deep learning, which involves training a very large multilayered virtual network of neurons to recognize patterns in vast quantities of data. The Baidu app for smartphones lets users search by voice, and also includes a voice-controlled personal assistant called Duer. Voice queries are more popular in China because it is more time-consuming to input text, and because some people do not know how to use Pinyin, the phonetic system for transcribing Mandarin using Latin characters.

In developing Deep Speech 2, Baidu also created new hardware architecture for deep learning that runs seven times faster than the previous version. Deep learning usually relies on graphics processors, because these are good for the intensive parallel computations involved.
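
As a reminder of what "a very large multilayered virtual network of neurons" means mechanically, here is a toy forward pass. Deep Speech 2 itself is vastly larger and uses recurrent layers trained on thousands of hours of audio; the layer sizes and random weights below are made up purely to show the layered idea.

# A toy forward pass through a multilayered network - the mechanical core
# of deep learning. Sizes and random weights are made up for illustration;
# Deep Speech 2 uses huge recurrent layers and far more training machinery.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [80, 256, 256, 29]  # e.g. 80 spectrogram bands -> 29 characters

weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x: np.ndarray) -> np.ndarray:
    """Each layer re-represents its input; stacking layers is the 'deep' part."""
    for w in weights[:-1]:
        x = np.maximum(0.0, x @ w)        # ReLU hidden layers
    logits = x @ weights[-1]
    e = np.exp(logits - logits.max())     # softmax over output characters
    return e / e.sum()

frame = rng.normal(size=80)               # one frame of audio features
probs = forward(frame)
print(probs.shape, probs.sum())           # (29,), sums to 1: a distribution over characters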


Here’s a report from Ericsson ConsumerLab on next year’s trends.
10 Hot Consumer Trends 2016
This report presents insights from various studies that have been conducted by Ericsson ConsumerLab during 2015. The broadest trend in this report is representative of 1.1 billion people across 24 countries, whereas the narrowest trend is representative of 46 million urban smartphone users in 10 major cities. Source information is given separately for data used on each page.

... there are three important underlying shifts that set the scene for how these trends should be interpreted.
All consumer trends involve the internet
Today, almost all consumer trends involve the internet, since many aspects of our physical lives are merging with our online habits: shopping, working, socializing, watching TV, studying, traveling, listening to music, eating and exercising are just a few examples. This is happening because we use mobile broadband or Wi-Fi, rather than cables. We may not be physically more mobile, but our online activities are less restricted by our surroundings than ever before.

Early adopters are less important
Four years ago, Ericsson ConsumerLab said that women drive the smartphone market by defining mass-market use. But as the speed of technology adoption increases, mass-market use becomes the norm much quicker than before. Successful new products and services now reach the mass market in only a matter of years. This means that the time period when early adopters influence others is shorter than before. Since new products and services increasingly use the internet, mass markets are not only faster, but are also more important than ever to consumers themselves. Most internet services become more valuable to individuals when many use them.

Consumers have more influence
Only a few years ago, there was a lot of focus on how the internet is influencing consumers. However, now consumers are using the internet to influence what goes on around them. For some time, prosumption – consumers participating in the production process – was mostly limited to user-generated media content. However, online user reviews, opinion sharing, petitions and instant crowd activities are now becoming the norm more than an exception. Although not all online activity is carried out by engaged consumers and some may even be classed as ‘slacktivism’ (lethargic, one-click internet activism), it is still perceived to have real effect. With such a large part of the world’s population now online, it is clear that there is strength in numbers.


This is a longish but excellent and even-handed discussion of what “genetically modified organism” means and whether it can be usefully defined.
It’s practically impossible to define “GMOs”
Debates rage over what to do about genetically modified organisms, but we rarely stop to ask a more basic question: Do GMOs really exist? It’s an important question, because no one in this debate can tell you precisely what a GMO is. I’ve come to the conclusion that “GMO” is a cultural construct. It’s a metaphor we use to talk about a set of ideas. It doesn’t map neatly onto any clear category in the physical world.

GMOs, like other cultural constructs — think of gender, or race — do have a basis in reality, of course: We can roughly define “male” or “Asian,” but when we try to regulate these divisions, all kinds of problems crop up. And definitions of “GMOs” are much messier — “nerd” might be a roughly equivalent category. You know what a nerd is, but things would break down fast if you were required to label and regulate all the nerds. The definition of a nerd depends on the context; it depends on who’s asking. Same with GMOs.

As one researcher put it, “It is theoretically and practically impossible to precisely specify a supposed common denominator for all these [GMO] products.”

This causes real problems. People argue about GMOs because they are worried about safety, or environmental integrity, or human rights. But because the category is so porous, any policy governing “GMOs” — whether encouraging or discouraging them — can work directly against those values.

By now you are probably protesting that you’ve seen definitions that work well enough. (I’ve doubtless taken a stab at providing one at some point.) But let’s zoom in to take a look at how well those definitions fare in practice.


For Fun
Maybe lots of people have already seen this site by Google - but it is always interesting to review what people have been most curious about in the past year. Also listed are the top lists.
A Year in Search 2015
Explore the year's biggest moments and the questions they inspired.