Thursday, January 14, 2016

Friday Thinking 15 January 2016

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 

In the 21st Century curiosity will SKILL the cat.

...biologically-based intelligence may constitute only a very brief phase in the evolution of complexity, followed by what futurists have dubbed the “singularity”—the dominance of artificial, inorganic intelligence. If this is indeed the case, most advanced species are likely not to be found on a planet's surface (where gravity is helpful for the emergence of biological life, but is otherwise a liability). But they probably must still be near a fuel supply, namely a star, because of energy considerations. Even if such intelligent machines were to transmit a signal, it would probably be unrecognizable and non-decodable to our relatively primitive organic brains.

This could perhaps explain the Fermi paradox. If this scenario holds true, our chances of detecting simple life via biosignatures may be far greater than those of discovering intelligent ETs. Still, the ultimate goal of detecting the signature of an advanced intelligence, whether biological or nonbiological, remains the most intriguing option. All power to proposed projects for the 2020s such as Japan's Space Infrared Telescope for Cosmology and Astrophysics (SPICA) and NASA’s Far Infrared Surveyor.

The key point is that for the first time in human history, we are only two or three decades away from being able to actually answer the “Are we alone?” question. Because the answer may affect nothing less than our last claim for being special in the cosmos, its importance cannot be overemphasized.
If There Are Aliens Out There, Where Are They?
Alien life, if it exists, could be as simple as bacteria or more complex than humans—and there are optimal strategies for searching for both

Are we really going to wait for the Cable Guy to bring everyone in America the same modern internet access capacity at reasonable prices that other countries have had for years? Judging from the history of this industry, in which every DNA strand is encoded with monopoly genes (unaccompanied by any oversight), we’d have a better chance with Santa Claus.

We need a better plan — a better vision — if we want to unlock the full benefits that access to the internet can bring Americans. Right now, as a country, we’re investing inefficiently and in the wrong things at a time when we should be unleashing private capital to invest in smarter, faster, and cheaper ways.

[We] should invest only in technologies that can meet both our future and immediate needs. That means fiber optic networks, which not only can handle the advanced internet services of today, but also will be able to bring us internet applications that evolve over the next 30 years.

We should stop throwing money at old, copper-based technologies, like the ones Big Cable and the telcos rely on. Because of its limitations over long distances, copper can’t handle many current internet applications, like telemedicine, much less the applications coming in the next few years. And since much of the cost of a network lies in wiring the last mile to homes and businesses, we shouldn’t pay to dig up our roads to put in copper lines that will be obsolete almost as soon as they’re installed.

We should also stop thinking that satellite and mobile wireless technologies are the answer. They don’t provide the speeds and bandwidth we will need.
Big Cable Owns Internet Access. Here’s How to Change That.

The 130-year timeline of telephone innovation describes a relatively steady rise as the technology under the surface was continuously improved, with a handful of spikes for inventions such as the rotary dial, touch pad dialing, the fax machine, and, of course, 1959’s Princess phone. The changes were predictably predictable, the future rising in a fairly smooth incline.

But the timeline of innovation for the defining technology of our new age is barely a line at all: The Internet happens, and all hell breaks loose. The future no longer works the way we thought it did. The spikes become not just continual but frequently simultaneous and radically unpredictable. You couldn’t have foreseen Twitter, and if you had, you probably would have dismissed it as a dumb idea. I would have.

The telephone’s line of progress was a low incline with very few spikes because the telephone company was the only one allowed to innovate with telephony. The Internet’s progress is shaped like the lines coming out of a cartoon explosion because anyone can innovate, and then iterate on other people’s innovations. With sites like GitHub, which enables developers to easily build on the work of others, and Stack Overflow, where developers help one another over hurdles of every height, we are getting cascades of innovation.

If the fundamental purpose of a telephone is to allow people to talk, the fundamental purpose of the Internet is to allow people to innovate alone and together. That is, the purpose of the Net is to confound predictions. And it has been doing an excellent job of it.

We are stepping into a future that is new not just in what it contains but in our picture of how it works. The future seems less like the product of a clockwork’s relentless ticking than the result of uncountable tiny pieces, each simultaneously affecting every other in ways that cannot be fully understood afterward, much less predicted beforehand. Plus, some of those small pieces are on the Internet actively inventing new futures together.
The Future No Longer Works the Way We Thought It Did
And predictability is going through some unpredictable changes.

Blockchain technology (the secure distributed ledger software that underlies cryptocurrencies like Bitcoin) connotes the Internet II: the transfer of value, as a clear successor position to the Internet I: the transfer of information. This means that all human interaction regarding the transfer of value, including money, property, assets, obligations, and contracts could be instantiated in blockchains for quicker, easier, less costly, less risky, and more auditable execution. Blockchains could be a tracking register and inventory of all the world’s cash and assets. Orchestrating and moving assets with blockchains concerns both immediate and future transfer, whereby entire classes of industries, like the mortgage industry, might be outsourced to blockchain-based smart contracts, in an even more profound move to the automation economy. Smart contracts are radical as an implementation of self-operating artificial intelligence, and also through their parameter of blocktime, rendering time, too, assignable rather than fixed.

Blockchains thus qualify as the most important kind of news, the news of the new—something that changes our thinking; causes us to pause; where in a moment we instantly sense that now things might be forever different as a result.
Melanie Swan - Rethinking Authority With The Blockchain Crypto Enlightenment

Once considered forbiddingly abstruse and impossible for mere mortals to navigate, the Internet took off once it translated its complexities into the friendlier terms of the domain-name system and the World Wide Web. The network we use and enjoy today achieved its scope and fertility by giving participants more freedom, more choices, and more human variety than any of its closed-off predecessors — once-dominant businesses like AOL and Compuserve that had to adapt or wither away.

Now, a new wave of visionary technologists is betting that we can do the same thing all over again — this time, turning the admittedly complex mechanisms that make the digital currency Bitcoin work into a friendlier system that can fetch you a ride and book you a room while playing you some personalized music. The key to all of their dreams is called the blockchain. Today, it enables Bitcoin; tomorrow, it could be running your life.
Can an Arcane Crypto Ledger Replace Uber, Spotify and AirBnB?

This is another article from Susan Crawford - a serious Internet researcher and activist for universal access. Although this is written primarily for American citizens - its warnings and analysis are applicable everywhere. This is a must read for anyone not convinced of the need to make Internet access a public infrastructure.
Big Cable Owns Internet Access. Here’s How to Change That.
Surveying the landscape of internet access, one could be forgiven for a single dark conclusion: Winter is coming.

We know that Big Cable’s plan for high-speed internet access is to squeeze us with “usage-based billing” and data caps, so as to milk ever-growing profits from their existing networks rather than invest in future-proof fiber optics. We are also seeing that Big Cable has won the war for high-capacity, 25Mbps-download-or-better wired internet access, leaving AT&T and Verizon to concentrate primarily on mobile wireless. Indeed, Big Cable’s share of new and existing wired-access subscribers has never been greater — cable captured all net new subscribers in the third quarter of 2015 as well as millions of subscribers fleeing DSL — and its control over this market is growing faster than ever.

Wall Street analyst Craig Moffett predicts that, in the end, unless things change, cable will have 90 percent of subscribers in areas where it faces competition from only traditional DSL and will have the lion’s share of subscribers in areas where cable faces competition from souped-up copper-line DSL and fiber-to-the-node (aka “fiber to the neighborhood”).

It’s that time of year, predictions galore - but also it’s EDGE.ORG’s annual question posed to some of the world’s smartest, most creative individuals. This year the question is:
Scientific topics receiving prominent play in newspapers and magazines over the past several years include molecular biology, artificial intelligence, artificial life, chaos theory, massive parallelism, neural nets, the inflationary universe, fractals, complex adaptive systems, superstrings, biodiversity, nanotechnology, the human genome, expert systems, punctuated equilibrium, cellular automata, fuzzy logic, space biospheres, the Gaia hypothesis, virtual reality, cyberspace, and teraflop machines. ... Unlike previous intellectual pursuits, the achievements of the third culture are not the marginal disputes of a quarrelsome mandarin class: they will affect the lives of everybody on the planet.

You might think that the above list of topics is a preamble for the Edge Question 2016, but you would be wrong. It was a central point in my essay, "The Third Culture," published 25 years ago in The Los Angeles Times, 1991 (see below). The essay, a manifesto, was a collaborative effort, with input from Stephen Jay Gould, Murray Gell-Mann, Richard Dawkins, Daniel C. Dennett, Jared Diamond, Stuart Kauffman, Nicholas Humphrey, among other distinguished scientists and thinkers. It proclaimed:
The third culture consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.

"The wide appeal of the third-culture thinkers," I wrote, "is not due solely to their writing ability; what traditionally has been called 'science' has today become 'public culture.' Stewart Brand writes that 'Science is the only news. When you scan through a newspaper or magazine, all the human interest stuff is the same old he-said-she-said, the politics and economics the same sorry cyclic dramas, the fashions a pathetic illusion of newness, and even the technology is predictable if you know the science. Human nature doesn't change much; science does, and the change accrues, altering the world irreversibly.' We now live in a world in which the rate of change is the biggest change." Science has thus become a big story, if not the big story: news that will stay news.

This is evident in the continued relevance today of the scientific topics in the 1991 essay, all of which were in play before the Web, social media, mobile communications, deep learning, and big data. Time for an update. …

This is a very significant development in the world of computation - and all the enabling applications that follow. This advance has a lot of potential for making a big leap in AI accessible to mobile and other devices.
Emergent Chip Vastly Accelerates Deep Neural Networks
Stanford University PhD candidate, Song Han, who works under advisor and networking pioneer, Dr. Bill Dally, responded in a most soft-spoken and thoughtful way to the question of whether the coupled software and hardware architecture he developed might change the world.

In fact, instead of answering the question directly, he pointed to the range of applications, both in the present and future, that will be driven by near real-time inference for complex deep neural networks—all a roundabout way of showing not just why what he is working toward is revolutionary, but why the missing pieces he is filling in have kept neural network-fed services at a relative constant.

There is one large barrier to that future Han considers imminent—one pushed by an existing range of neural network-driven applications powering all aspects of the consumer economy and, over time, the enterprise. And it’s less broadly technical than it is efficiency-driven. After all, considering the mode of service delivery of these applications, often lightweight, power-aware devices, how much computation can be effectively packed into the memory of such devices—and at what cost to battery life or overall power? Devices aside, these same concerns, at a grander level of scale, are even more pertinent at the datacenter where the bulk of the inference is handled.

The challenge for such applications is no longer so much one of developing neural networks capable of ever-exacting levels of accuracy, but rather, in fine tuning, pruning, and refining those trained neural nets so that they can very efficiently be processed and delivered to the end user. The capability to do so will have an immense impact on everything from the future of self-driving cars to a wider range of consumer-side applications that power “mobile first” companies like Baidu, for instance.

Using nine different deep neural network benchmarking suites, EIE performed inference operations anywhere between 13X and 189X faster (depending on the benchmark) than regular CPU and GPU implementations, and that is without any compression. Still, consider the power envelope: as the benchmarks show, energy efficiency is better by 3,000X versus a GPU and 24,000X versus a CPU.
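The "pruning" mentioned above is worth seeing concretely: magnitude-based pruning zeroes out the smallest weights in a trained network so that a sparse engine like EIE only needs to store and multiply the survivors. This is a minimal NumPy sketch of that one idea, not the actual EIE/Deep Compression pipeline; the function name, layer size, and 90 percent sparsity figure are all invented for illustration.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping the largest ones.

    Illustrative sketch only: real pipelines also retrain the surviving
    weights and store them in a compressed sparse format.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)          # number of weights to drop
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # The k-th smallest magnitude becomes the survival threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))            # a toy fully connected layer
pruned, mask = prune_by_magnitude(w, sparsity=0.9)
```

In a sparse storage format, only the surviving ~10 percent of weights (plus their indices) need to be kept in memory and multiplied, which is where the speed and energy wins at inference time come from.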

This is a fascinating article that speaks to real-time processing of our behavioral data - a much better way to create the conditions for learning as we do it. “If you pay enough attention to the student as they learn, you don’t need to test them at the end.”
Hate exams? Now a computer can grade you by watching you learn
Artificial intelligence can predict a student’s ability to solve problems by looking at past performance – and help them learn better too
HOW do you show that you know what you know? Often, you have no choice but to take a test.

A new algorithm could both improve your knowledge and do away with formal tests altogether. Developed by researchers at Stanford University and Google in California, it analyses students’ performance on past practice problems, identifies where they tend to go wrong and forms a picture of their overall knowledge.

The idea of using software to track a student’s progress isn’t new. But few attempts so far have exploited deep learning, the cutting-edge discipline of making machines learn by digesting large amounts of data.

Chris Piech at Stanford and his team fed their system more than 1.4 million student answers to maths problems set on the online learning platform Khan Academy, and the corresponding scores. They also trained a neural network to sort questions by type: those involving square roots, the slope of graphs, or calculating where a line meets the horizontal axis on a graph, for example.
With all this information, the system then began to learn each student’s capabilities on each question type.
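The Stanford/Google system is a deep (recurrent) neural network, but the underlying idea - update a per-topic estimate of the student after every answer, then use it to predict the next one - can be sketched with a toy model. Everything here (the class name, the topic labels, the simple smoothing rule) is invented for illustration and is far simpler than the actual model.

```python
from collections import defaultdict

class SimpleKnowledgeTracer:
    """Toy stand-in for the deep-learning model in the article:
    keeps a smoothed per-topic estimate of how likely a student is
    to answer the next question of that type correctly.
    """
    def __init__(self, prior=0.5, learning_rate=0.3):
        self.skill = defaultdict(lambda: prior)
        self.lr = learning_rate

    def predict(self, topic):
        # Estimated probability of a correct answer on this topic.
        return self.skill[topic]

    def update(self, topic, correct):
        # Nudge the estimate toward the observed outcome (1 or 0).
        self.skill[topic] += self.lr * (float(correct) - self.skill[topic])

tracer = SimpleKnowledgeTracer()
# One wrong answer followed by three correct ones on the same topic.
for outcome in [0, 1, 1, 1]:
    tracer.update("square_roots", outcome)
p = tracer.predict("square_roots")
```

After that sequence the estimate has risen well above the 0.5 prior, which is the sense in which watching a student learn can replace a final test: the running estimate already encodes what an exam would measure.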

This is a brilliant idea and we should start at least in high school - developing some 21st Century literacies. This is potentially the Facebook killer - by giving citizens the space, knowledge and abilities for life in the digital environment. This is a MUST READ for anyone interested in Knowledge Management and Learning.
The Web We Need to Give Students
“Giving students their own digital domain is a radical act. It gives them the ability to work on the Web and with the Web.”
The Domain of One’s Own initiative at University of Mary Washington (UMW) is helping to recast the conversation about student data. Instead of focusing on protecting and restricting students’ Web presence, UMW helps them have more control over their scholarship, data, and digital identity.

The Domains initiative enables students to build the contemporary version of what Virginia Woolf in 1929 famously demanded in A Room of One’s Own — the necessity of a personal place to write. Today, UMW and a growing number of other schools believe that students need a proprietary online space in order to be intellectually productive.

As originally conceived at the Virginia liberal arts university, the Domains initiative provides students and faculty with their own Web domain. It isn’t simply a blog or a bit of Web space and storage at the school’s dot-edu, but their own domain — the dot com (or dot net, etc) of the student’s choosing. The school facilitates the purchase of the domain; it helps with installation of WordPress and other open source software; it offers both technical and instructional support; and it hosts the site until graduation when domain ownership is transferred to the student.

And then — contrary to what happens at most schools, where a student’s work exists only inside a learning management system and cannot be accessed once the semester is over — the domain and all its content are the student’s to take with them. It is, after all, their education, their intellectual development, their work.

Clarence Fisher introduced Domains last year to his high school students at the Joseph H. Kerr School in Snow Lake, Manitoba. “The kids came in to the class with what I would call fair and average teen tech skills,” he said. “Lots of iPods, iPads, and laptops. Lots of Facebook and Instagram. But none of them had a presence online they were in control of before this.”

Learning and innovation - can we design our organizations for serendipity?
How to Cultivate the Art of Serendipity
DO some people have a special talent for serendipity? And if so, why?
Walpole suggested … a crucial idea about human genius: “As their highnesses travelled, they were always making discoveries, by accident and sagacity, of things which they were not in quest of.” And he proposed a new word — “serendipity” — to describe this princely talent for detective work. At its birth, serendipity meant a skill rather than a random stroke of good fortune.

Dr. Erdelez agrees with that definition. She sees serendipity as something people do. In the mid-1990s, she began a study of about 100 people to find out how they created their own serendipity, or failed to do so.

Her qualitative data — from surveys and interviews — showed that the subjects fell into three distinct groups. Some she called “non-encounterers”; they saw through a tight focus, a kind of chink hole, and they tended to stick to their to-do lists when searching for information rather than wandering off into the margins. Other people were “occasional encounterers,” who stumbled into moments of serendipity now and then. Most interesting were the “super-encounterers,” who reported that happy surprises popped up wherever they looked. The super-encounterers loved to spend an afternoon hunting through, say, a Victorian journal on cattle breeding, in part, because they counted on finding treasures in the oddest places. In fact, they were so addicted to prospecting that they would find information for friends and colleagues.

You become a super-encounterer, according to Dr. Erdelez, in part because you believe that you are one — it helps to assume that you possess special powers of perception, like an invisible set of antennas, that will lead you to clues.

So how many big ideas emerge from spills, crashes, failed experiments and blind stabs? One survey of patent holders (the PatVal study of European inventors, published in 2005) found that an incredible 50 percent of patents resulted from what could be described as a serendipitous process. Thousands of survey respondents reported that their idea evolved when they were working on an unrelated project — and often when they weren’t even trying to invent anything. This is why we need to know far more about the habits that transform a mistake into a breakthrough.

Another way to encounter serendipity is the innovative use of frames and metaphors. This article provides a comprehensive discussion of visual metaphors. The visuals and discussion are worth it.
How To Think Visually Using Visual Analogies
Most research in cognitive science explores how we see things but little research is done on how we understand what we see.

Understanding is the ultimate test of how good your visualization is. So how can you make people understand? Show something familiar and analogize. If you know nothing else about visualization but pick the right analogy, you are more than halfway there. This is what a professional designer does - and there is no substitute for analogies.

How do you choose the right analogy? In this grid I organized analogies from the abstract down to the more detailed. I grouped them by similarity in shape. The goal is to enable you to quickly see the possibilities and “try them on” your information. With time you’ll be able to do all of this in your head. But for now this is a shortcut.
Let’s start simple and abstract.

This is a longish but interesting article on the potential of the blockchain to continue a disruption of centralized approaches to organization. A distributed platform may sound like an oxymoron - but with AI enabling the displacement of ‘apps’ by personal e-ssistants - we must seriously re-imagine our business models.
Can an Arcane Crypto Ledger Replace Uber, Spotify and AirBnB?
The technology underlying Bitcoin has inspired a new flavor of techno utopianism that could spell the end of centralized app services as we know them
…what if it’s not 2015 but, say, 2025, and you could instantly find, hire and pay providers of all those services without going through a company of any kind? What if access to those services, and many more like them, came baked into the network itself, like email or a Web page, protocol-to-protocol rather than company-to-company? And what if these relationships were all managed autonomously by high-order math running on distributed computing engines, beyond the control of any one individual or organization?

The vision of such a platform-less platform has been cherished in many pockets of alt-thinking, from the “every man an island” libertarians to the anarcho-bliss communal left. It has not dominated today’s technology industry, with its ubiquitous centralized platforms competing to provide free services and then monetize them selling ads or data (or taking a transaction cut). But it has one huge win under its belt: the rise of the Internet itself, where businesses, organizations, and individuals all connect and transact business with one another directly, without asking for permission or paying a monopoly-holder or begging for approval from an app-store manager.

A blockchain is a cryptographically protected shared database — a public ledger or journal that anyone (with the right skills and tools) can contribute to. Once information is entered on a blockchain, anyone can inspect it, and it’s nearly impossible to alter it. The most widely used blockchain today is the one that tracks Bitcoin transactions and keeps each unit of currency from being illicitly duplicated. That’s called “solving the double-spend problem,” and the blockchain, though not infallible, is a particularly elegant and effective technique. But there’s no reason that blockchains couldn’t be used for all kinds of other purposes — any situation that calls for an open public record where everyone can keep extending it into the future but no one can tamper with its past: think intellectual property rights, personal identity verification, real estate records, and so on.
Today there’s no shortage of startups, projects, and developers trying to apply the blockchain concept to everything. More than anything else, that’s because it gives the tech industry another bite at a long-coveted apple: decentralization.
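The tamper-evident, append-only property described above comes from each block embedding a hash of its predecessor, so altering any past entry invalidates everything after it. Here is a minimal sketch of just that hash-chaining mechanism in Python - none of Bitcoin's proof-of-work, networking, or consensus is included, and the record contents are invented.

```python
import hashlib
import json

def block_hash(contents):
    # Canonically serialize the block contents and hash them.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    # Each new block commits to the hash of the previous block.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "data": data}
    block["hash"] = block_hash({"prev": prev, "data": data})
    chain.append(block)
    return chain

def verify(chain):
    # Recompute every hash; any edited block breaks the chain.
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash({"prev": b["prev"], "data": b["data"]}):
            return False
        prev = b["hash"]
    return True

chain = []
for record in ["deed: lot 12 -> alice", "deed: lot 12 -> bob"]:
    append_block(chain, record)
print(verify(chain))                          # True
chain[0]["data"] = "deed: lot 12 -> mallory"  # tamper with history
print(verify(chain))                          # False
```

This is why the article's real-estate and identity examples work: anyone can extend the record into the future, but rewriting its past is immediately detectable by anyone who re-verifies the chain.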

This is a great article with some nice pictures related to Google’s ongoing project to bring us self-driving cars. The progress being made is very impressive, as is the attention given to safety.
License to (Not) Drive
An exclusive look behind the scenes at Google’s autonomous car testing center
The former Castle Air Force Base sits on approximately 2,000 acres in Atwater, California, in Merced County. There was never a castle there; the name honors a downed World War II pilot. For many years, it was a home for long-range bombers with the Strategic Air Command. After the Cold War, the government decommissioned the base, opening it up to commercial purposes. In 2014, Google rented 60 acres — now expanded to around 100 — to test its self-driving cars (SDC) and train its drivers. Googlers refer to the facility, 2.5 hours from the company’s HQ, as simply “Castle.”

Mission control at Castle is a double-wide trailer that seems more like the op center at a construction site than a dispatch center for the future. There are desks, a ratty sofa, and instead of the high-end espresso maker commonly found at the company’s facilities, a coffeemaker that Joe DiMaggio would recognize. The most Googley objects are what look like military-grade water ordnance; they are actually Bug-a-Salt rifles that shoot pellets at the swarms of insects that are ubiquitous during the Central Valley summer.

I’m there for my own training—to find out what it’s like to not drive a car while sitting in the driver’s seat. When I heard that Google has a formal program to qualify people to operate cars that, at least aspirationally, don’t need anyone to operate them, I volunteered. Going through the whole program was out of the question — it takes four weeks, full-time, and god knows what liabilities might be involved. But the company decided that with a compressed lesson on the basics, it would be okay for me to putter around at its private testing grounds. As an afterthought, a rep asked me, “Uh, you are a good driver, right?”

I was in, and not just for the opportunity to sit in the front seat of Google’s car. In the process, I was rewarded with a very rare look at Google’s SDC industrial complex, a co-evolved infrastructure of hardware, software, drivers, and engineers that the company hopes will contribute to its argument that autonomous systems are the future.

And already drones come to the rescue.
A Robot Life Preserver Goes to Work in the Greek Refugee Crisis
THE EUROPEAN REFUGEE crisis isn’t so much a crisis as it is a catastrophe. Fleeing violence in Africa and the Middle East, particularly Syria, more than a million migrants crossed by sea into Europe in 2015. Almost 4,000 of them lost their lives in the journey. The sea crossings can be especially dire, as leaky, unsafe boats capsize or break apart in rough water. In Greece the danger has proven massive, particularly off the island of Lesvos, which takes in an average of 2,000 refugees daily.

Every day around Lesvos the Coast Guard must rescue boats that have capsized, run out of fuel, or simply broken down. Which is why the Coast Guard invited a team from Texas A&M University’s Center for Robot-Assisted Search and Rescue to launch a pilot project this week for a very special robot—Emily, the Emergency Integrated Lifesaving Lanyard.

Think of Emily as a life preserver melded with a jet ski. It’s about four feet long and shaped like a pickle spear. An operator remotely controls the robot, tethered to a rope up to 2,000 feet long, to migrants struggling at sea. The victims take hold of the buoyant bot and a rescuer reels the line in. Quadcopter drones called Fotokites, themselves tethered on 30-foot ropes near the operators, pipe back an overhead view.

This is a great idea for all of us living in the lands of winter - not just for our homes, but also for better, lighter clothes to keep us more comfortable in the outdoors.
It’s the dead of January, and your living room is so cold that the chill seeps into your bones, and you can barely feel your fingertips. You might crank up the thermostat. But soon you could be cozying up in a self-heating sweater.

Engineers at Stanford University have figured out how to coat clothing in a meshwork of silver nanowire so that it not only insulates better than regular clothes but also generates its own heat. And cheap versions could hit store racks in three years, predicts Yi Cui, an associate professor of materials science and engineering at Stanford who led the project, described in Nano Letters last November.

Cui and his colleagues came up with the idea for the fabric when considering that nearly half of the world’s energy consumption goes toward heating buildings — which contributes to up to a third of global greenhouse gas emissions. To reduce energy waste and keep heat indoors, most engineers have tried to boost the insulating properties of building materials. But the Stanford team turned its focus to keeping people warm. Regular clothing can do this by trapping heat, but it allows much of that heat to dissipate back into the surrounding air.

Because nanowires are so thin, only a little silver is needed. Benjamin Wiley, an assistant professor of chemistry at Duke University, calculates that the coating would add just 10 bucks to the cost of a shirt at most. Meanwhile, Cui estimated that wearing the material would save around 1,000 kilowatt-hours per person a year — about the amount of electricity that the average U.S. home uses every month.

Another new material has recently been developed in the lab. Perhaps this also continues the process of assembling matter atom by atom.
Q-carbon Puts Diamonds in Second Place
Long ago, ancient scientists attempted to master the craft of alchemy, or the mythical process of turning lead into gold. Alchemy has since been proven to be a hopeless task, but modern scientists have successfully unlocked the secrets to an even more stunning transformation: turning carbon, the basic building block of life, into diamonds.

A new, simple carbon-transforming technique that uses a laser to produce tiny diamond “seeds” is yielding even more sparkling results. Researchers at North Carolina State University used a laser to craft the new hardest rock on the block, which they named Q-carbon. The novel substance possesses a host of useful properties, such as ferromagnetism, fluorescence and the ability to conduct electricity, making Q-carbon a potentially useful material for scientists and industrialists. In their findings, which were published this week in the Journal of Applied Physics, researchers estimate that Q-carbon is 60 percent harder than diamond, which is a result of tighter bonds between the atoms in Q-carbon’s structure.

To create the new substance, the researchers used a laser to deliver a quick, 200-nanosecond burst of energy to an amorphous (having no definite shape or form) carbon film, heating it to 6,740 degrees Fahrenheit. The laser jolt melted the carbon, which then cooled rapidly to form a crystal lattice structure. Depending on the energy levels and cooling period, the carbon would crystallize into either microscopic diamonds or Q-carbon. The cooling process is known as “quenching,” and it’s also the inspiration behind the carbon structure’s name. The process is also speedy, enabling the researchers to make a carat of diamonds in about 15 minutes.

And what new form of matter will become conceivable with the fundamental discoveries being developed?
Exotic ‘Four Neutron-No Proton’ Particle Confirmed
For the first time, researchers have confirmed the existence of a unique particle made up of four neutrons and no protons—the tetraneutron.
Following the recent addition of four new elements to the periodic table, nuclear physics made headlines again as Japanese physicists announced the creation and discovery of the long-elusive tetraneutron particle. This research was published in Physical Review Letters.

Neutrons are surprisingly elusive particles and subjects of serious theoretical interest. As physicists have known for nearly a century, they help 'glue' the atomic nucleus together against the repulsion between protons, and are thus crucial to understanding nuclear dynamics.

However, the lone neutron is unstable and takes only about fifteen minutes to decay into a proton. Furthermore, neutrons are not affected by electric and magnetic fields as they are electrically neutral, making them even harder to manipulate in the lab. Despite these experimental difficulties, neutron-neutron interactions are of great theoretical importance.
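The "about fifteen minutes" refers to the free neutron's mean lifetime, roughly 880 seconds by accepted measurements (a value assumed here, not stated in the excerpt). A minimal sketch of the resulting exponential decay:

```python
import math

# Exponential decay of a population of free neutrons.
# Assumption: mean lifetime tau of about 880 s (~14.7 minutes),
# the accepted measured value for the free neutron.
tau = 880.0  # seconds

def surviving_fraction(t_seconds):
    """Fraction of free neutrons that have not yet decayed after t seconds."""
    return math.exp(-t_seconds / tau)

for minutes in (1, 15, 60):
    f = surviving_fraction(minutes * 60)
    print(f"after {minutes:3d} min: {f:.3f} of the neutrons remain")
```

After one mean lifetime only about a third of the neutrons survive, which is why experiments on free neutrons must work quickly.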

“Both very large atomic nuclei (where neutrons outnumber protons about three to two, on average) and neutron stars contain large clumps of neutrons, whose behavior remains very poorly understood,” explains Professor Susumu Shimoura, from the University of Tokyo Graduate School of Science.

The looming turbulence of deep paradigm change in energy-driven geopolitics is now upon us - the 'awakening'. :) Some may think that cheap oil will undermine solar and other renewable forms of energy. But cheap oil makes investment in oil production increasingly risky, and even if oil prices go back up, solar will do better - not just because the cost of solar is plummeting, but because once installed, solar energy is free. This makes money previously invested in oil more available to invest in solar, no matter what momentary 'anti-rooftop' attempts are made. The key lesson is that a focus on austerity and reductions won't grow an enterprise, an industry or an economy.
As Oil Crashed, Renewables Attracted Record $329 Billion
The slump in oil prices that’s brought upheaval and cost-cutting to the traditional energy industry spared renewables such as solar and wind, which raked in a record $329.3 billion of investment last year.

The 4 percent increase in clean energy technology spending from 2014 reflected tumbling prices for photovoltaics and wind turbines as well as a few big financings for offshore wind farms on the drawing board for years, according to research from Bloomberg New Energy Finance released on Thursday.

“These figures are a stunning riposte to all those who expected clean energy investment to stall on falling oil and gas prices,” said Michael Liebreich, founder of the London-based research arm of Bloomberg LP. “They highlight the improving cost-competitiveness of solar and wind power.”

Why would energy incumbents resist the acceleration of distributed solar energy production? Are we ready to enable anyone to produce solar or wind energy?
Rooftop solar producing more energy than WA's biggest turbine
Rooftop solar panels in the South-West Interconnected System (SWIS) in Western Australia are now producing as much energy as the state's largest power turbine, according to research from Curtin University.
SWIS stretches from Kalbarri north of Perth to Ravensthorpe in the state's south, taking in the Perth metropolitan area.
Curtin University sustainability professor Peter Newman said 20 per cent of homes across the grid have rooftop solar panels installed.

"We are in the extraordinary position of saying that Perth [SWIS] now has rooftop solar as the largest supplier of electricity, it's the biggest power station in WA," he said.

"It's nearly 500 megawatts and it's growing rapidly, by 2020 we could have half of Perth's [SWIS] households with rooftop solar." SWIS, which includes coal, gas, wind and solar generation, has the capacity to produce 5,300 megawatts of power, but it only used roughly two thirds of that at its peak in 2014/2015.

Not including solar, coal makes up about 50 per cent of WA's energy production mix, while gas represents 42 per cent and wind 6.3 per cent.
Professor Newman said the state's electricity utilities needed to rapidly adapt to the growth in solar.

"They didn't predict it, they have all these contracts for coal and gas that go 20 or 30 years and they have even got an old power station out of mothballs, fixed it up, but never turned it on," he said. "Despite the boom times we actually reduced our power consumption during this period because people are just not needing it if you've got the PV's [photovoltaic] on the roof."
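Putting the SWIS figures from this excerpt together gives a sense of scale (a quick arithmetic sketch using only the numbers quoted above):

```python
# Combining the SWIS figures quoted in the excerpt.
capacity_mw = 5300          # total SWIS generation capacity
peak_fraction = 2 / 3       # "roughly two thirds" used at peak, 2014/15
rooftop_solar_mw = 500      # "nearly 500 megawatts" of rooftop solar

peak_demand_mw = capacity_mw * peak_fraction
print(f"Peak demand: about {peak_demand_mw:.0f} MW")

share_of_peak = rooftop_solar_mw / peak_demand_mw
print(f"Rooftop solar is about {share_of_peak:.0%} of peak demand")

# The quoted non-solar mix (coal 50%, gas 42%, wind 6.3%) leaves only
# a small remainder for other sources:
print(f"Remainder of the mix: {100 - (50 + 42 + 6.3):.1f}%")
```

So rooftop solar already supplies on the order of one seventh of the grid's peak demand, which explains why a utility locked into 20- to 30-year coal and gas contracts would feel the squeeze.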

How soon will this sort of DNA testing be a first approach in a general medical exam?
Illumina’s Bid to Beat Cancer with DNA Tests
The world’s largest DNA sequencing company says it will form a new company to develop blood tests that cost $1,000 or less and can detect many types of cancer before symptoms arise.

Illumina, based in San Diego, said its blood tests should reach the market by 2019, and would be offered through doctors’ offices or possibly a network of testing centers.

The spin-off’s name, Grail, reflects surging expectations around new types of DNA tests that might do more to defeat cancer than the more than $90 billion spent each year by doctors and hospitals on cancer drugs. Illumina CEO Jay Flatley says he hopes the tests could be a “turning point in the war on cancer.”

The startup will be based in San Francisco and has raised more than $100 million from Illumina as well as Bill Gates, Jeff Bezos’s venture fund, Bezos Expeditions, and Arch Venture Partners. Illumina will retain majority control.

The testing concept being pursued by Illumina, sometimes called a “liquid biopsy,” is to use high-speed DNA sequencing machines to scour a person’s blood for fragments of DNA released by cancer cells. If DNA with cancer-causing mutations is present, it often indicates a tumor is already forming, even if it’s too small to cause symptoms or be seen on an imaging machine.

The use of AI and robotics will soon be everywhere for all sorts of reasons. Here is an interesting possibility.
Meet Leka, the vibrating 'social robot' designed to help children with autism learn new skills
  • Leka was unveiled at consumer technology show CES 2016 in Las Vegas
  • It provides sensory stimulation through lights, vibration and sound
  • The robot could help make learning and social interaction easier for children with autism and Down's syndrome, claim its makers
  • The company is currently raising funds on French website 'sowefund'
Technology has played a role in the treatment of children with autism for some time, but now a company has developed a robot designed specifically for people with the condition.

Called Leka, the motion-sensitive bot lets children play learning games by providing sensory stimulation through movement, lights, vibration and sound.
Its makers have likened it to a 'guide dog' for children with the condition, helping them to navigate the challenges of learning and social interaction.

And here is another effort in this same direction.
The sociable robot IO (Intelligent Observer), whose goal is to facilitate the diagnosis and treatment of developmental disorders in children, has been selected in the top ten out of 200 proposals by FINODEX, an accelerator that funds and supports projects from SMEs and web entrepreneurs that build on FIWARE technology and reuse open data, within the framework of the European Commission.

The robotic platform will assist in the diagnosis and treatment of autism, Asperger syndrome, specific language impairments and ADHD (Attention Deficit Hyperactivity Disorder) through the development of social skills. The robot will provide a natural-language voice interface, similar to the virtual assistants on smartphones and tablets. It will also include computer vision, enabling it to recognize people and objects, track eye movements and identify facial emotions.

The robot IO is based on FIWARE technology, a platform funded by the European Commission for the creation and deployment of Internet services and applications. This infrastructure includes the Internet of Things, big data, cloud hosting, data mining, interfaces to networks and devices (I2ND) and security.

The main advantage of IO over similar products will be its top-level performance at an affordable cost. Furthermore, by means of machine learning techniques, insights obtained from its big-data streams will provide the scientific community with very valuable data for research into developmental disorders.
