In the 21st Century curiosity will SKILL the cat.
1-5 YEAR FUTURE
There is no part of the in-person sports experience that won’t be radically impacted by advances in technology. Safety, security, transportation, and connection to the on-field action are all evolving. For the stadium owner, the difficult question is this: With technology advancing so quickly, how do you know where and when to place your bets? Late adopters get left behind, and, as ESPN's recent failure with 3-D broadcasting demonstrated, early adopters often get burned.
The answer is to invest in technology that is proven but still has plenty of room for innovation. For the next five to 10 years, that technology is undoubtedly the smartphone and the growing number of connected devices. The adoption question is settled: There are now more mobile devices than there are people on the planet. However, we are just beginning to understand how they will be put to use. Fans already expect the ability to tweet, upload pictures, view video, and access their social networks, and stadiums are working hard to keep up with the demand for data flow.
5-10 YEAR FUTURE
Video walls built into stadium architecture
On-field holographic replays
Glasses-free 3D technology in luxury boxes
“We are entering a time of commercial and policy confusion for sports broadcast and advertising. There will be a proliferation of niche channels and digital sports media. The supplementation, augmentation, or replacement of broadcast sports is a generational time bomb.”
10-25 YEAR FUTURE
THE CONVERGENCE
The very word “broadcast” disappears from the popular lexicon. Fans will not only be accessing sports content from multiple places simultaneously, they will begin to integrate these streams into increasingly seamless, coherent, and personalized viewer experiences. These multi-layered viewing experiences may themselves be packaged and sold by fans to other fans. Depending on which friend’s house you visit to watch the game, your experience will be radically different.
You might watch an entire game through a virtual reality headset from the perspective of your favorite quarterback while your friend in another town watches the game from the point of view of a key offensive lineman of the other team. You share your comments and highlights in real time. Don’t like the outcome of a play? Simply click over to a multiplayer video game that can reset the exact conditions that you just watched and run the play your way.
How will innovations in biological science change the fairness, safety, and meaning of sports?
Professional and scholastic sports will essentially become like NASCAR, with the human body regulated the way stock cars are. The often-hypocritical stigma against self-optimization in sports will disappear as the ability to improve one’s own genetic makeup goes mainstream, thanks to gene-editing technologies like CRISPR. Safe and detectable drugs that boost key physiological factors to specific, pre-determined amounts will be legal and will level the playing field for all. Success will be determined more by character, teamwork, strategy, and the mental edge than by the genetic lottery. In this sense, sports will become a purer test than we have today.
“Genetic engineering techniques are now cheap and widespread enough that any knowledgeable individual can order every material they need off the Internet and download the software to do their own experiments on themselves. Everyone in citizen science and the biohacker community has stories of being contacted by trainers, coaches, and athletes. There is absolutely no way to regulate it, and if you tried to, it would be like stemming the tide with a fork.”
THE FUTURE OF SPORTS
We have shifted into a new economy where we don’t know at the outset of some new activity who the best people to get involved are, how the project will proceed, or what the risks are. And the industrial-era organization is principally a hindrance, not a support.
The factory logic of mass production forced people to come to where the machines were. In knowledge work, the machines are where the people are. The Internet is also the first communication environment that decentralizes the financial capital requirements of production. Much of the capital is not only distributed, but owned by the workers, the individuals, who themselves own the smart devices, the new machines of work. The logic of ubiquitous communication makes it possible for the first time to cooperate with willing and able people, no matter where on the globe they may be. Knowledge work is not about jobs, but about tasks and interaction between interdependent people. You don’t need to be present in a factory, or an office, but you need to connect with, and be present for other people.
The architecture of work is not the structure of a firm, but the structure of the network. The organization is not a given hierarchy, but an ongoing process of responsive organizing. The main motivation of work may not even be financial self-interest, but people’s different and yet, complementary expectations of the future.
Our present system of industrial management creates systemic inefficiency in knowledge-based work. The approach in which managers do the coordination for the workers is just too slow and too costly in the low-transaction-cost environments we live in today. The enablers have turned into a constraint. It was the existence of high transaction costs outside firms that led to the emergence of the firm as we know it, and of management as we know it.
The biggest problem is that we still believe that the unit of work is the independent individual. Self-organization is then thought to mean a form of empowerment, or a do-whatever-you-like environment, in which anybody can choose freely what to do. But connected, interdependent people can never simply do what they like. If this happened, they would very soon be excluded. The relational view of social complexity means that all individuals constrain and enable each other all the time. We co-create our reality and the common narrative.
Esko Kilpi on the Architecture of Work
This article is aimed at the situation in the US - but it’s pertinent in most developed countries today - especially if we want to re-imagine the role of journalism and transparency in governance.
What If We Built a C-SPAN on Steroids?
Newspapers are collapsing, statehouse coverage is on the wane and lobbyists are quietly filling the gap. Here’s a solution.
In her recently released book, Dark Money, Jane Mayer painstakingly traces the startlingly successful efforts by Charles and David Koch and their conservative allies to use their billions to shape American policies. Mayer’s work pays special attention to state-level politics, and for good reason: For years, groups like ALEC, the State Policy Network, and (more recently) the Franklin Center for Government and Public Integrity have been focused on nullifying any progressive national policy making through state legislation.
These groups look like they’re conduits for bottom-up, grassroots expressions of discontent with the role of government in American lives, but according to Mayer their money comes “from giant, multinational corporations, including Koch Industries, the Reynolds American and Altria tobacco companies, Microsoft, Comcast, AT&T, Verizon, GlaxoSmithKline, and Kraft Foods” — and the draft bills, “news” stories, and opinion papers generated by these groups are aimed squarely at promoting state policies that will bolster the bottom lines of those companies.
Whether or not you agree with the overall policy goals of the Koch brothers, we have a democracy problem: At the same time that state legislative activity has gained in importance, the number of traditional news reporters covering statehouses has plummeted. According to numbers from the American Journalism Review and the Pew Research Center, less than a third of U.S. newspapers have a reporter present at the statehouse (either full- or part-time) and almost no local television stations assign a reporter to state politics.
Net result: the public’s awareness of and access to the activities of state government are vanishing, at the same time that the decisions made by state-level actors are having greater effects on American lives.
The first step towards righting this asymmetry is access, and there’s a good idea out there you need to know about: State Civic Networks are state-based, non-profit, independent, nonpartisan, “citizen engagement” online centers, and they should exist in every state. (Think C-SPAN, but way better, and focusing on statehouses.)
Clay Shirky brilliantly noted that the world doesn’t need newspapers - it needs journalism. This is a vital distinction, clarifying that traditional media do not necessarily embody the purpose of an institution of journalism - something we must also re-imagine for the 21st Century. The graph is worth a look - it illustrates just how precipitous the revenue declines have been.
All the News That's Affordable to Print
New York Times revenue in 2005: $3.4 billion -- last year: $1.7 billion
Good news: the New York Times had a higher profit in 2015 than it did in 2014. That extra profit did not come from an increase in revenue as digital finally offset the momentous decline in print—how great would that be???—but from the Times’ now-longstanding source of profit-making: cutting costs faster than revenue falls. Here’s a chart of the last ten years of Times annual revenues against total operating costs and operating profits.
Even with lofty goals like $800 million in digital revenue by 2020, the Times is already preparing the newsroom for more severe cuts, with the announcement of a top-to-bottom sweep of the newsroom in search of “further areas for cost reductions,” led by executive editor Dean Baquet and Upshot editor David Leonhardt:
“The simple fact is that to secure economic success and the viability of our journalism in the long term, the company has to look for judicious savings everywhere, and that includes the newsroom,” Mr. Baquet said. … “Instead of cuts and additions without a clear picture of where we are headed, we want to approach the task thoughtfully, with our mission and values clearly in mind. Everything we do must either be part of that mission or help generate the revenue to sustain our journalistic dominance.”
In a similar line of thought - here are some interesting observations from Internet scholar Danah Boyd commenting on her experience at the recent World Economic Forum in Davos. Her observations are interesting because they help articulate some of the uncertainties that loom ahead of us.
It’s not Cyberspace anymore.
It’s been 20 years — 20 years!? — since John Perry Barlow wrote “A Declaration of the Independence of Cyberspace” as a rant in response to the government and corporate leaders who descend on a certain snowy resort town each year as part of the World Economic Forum (WEF). Picture that pamphleteering with me for a moment…
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.
I first read Barlow’s declaration when I was 18 years old. I was in high school and in love with the Internet. His manifesto spoke to me. It was a proclamation of freedom, a critique of the status quo, a love letter to the Internet that we all wanted to exist. I didn’t know why he was in Davos, Switzerland, nor did I understand the political conversation he was engaging in. All I knew was that he was on my side.
Twenty years after Barlow declared cyberspace independent, I myself was in Davos for the WEF annual meeting. The Fourth Industrial Revolution was the theme this year, and a big part of me was giddy to go, curious about how such powerful people would grapple with questions introduced by technology.
What I heard left me conflicted and confused. In fact, I have never been made to feel more nervous and uncomfortable by the tech sector than I did at Davos this year.
It’s generally assumed that we can’t do geopolitical experiments - but this article explores a natural experiment that is worth considering.
Epic Country-Level A/B Test Proves Open Is Better Than Closed
The Soviet Union’s demise set Estonia and Belarus on opposite paths at the same moment in history. 25 years on, the results are clear.
Rarely do countries and societies have the opportunity to make a simple, binary choice about whether they are going to be open or closed. But that is exactly what happened after the dissolution of the Soviet Union and the reestablished independence of Estonia and Belarus. The two countries lie just a few hundred kilometers apart, to the west of Russia, but their trajectories could not be more different.
Estonia is “The Little Country That Could,” the title of a book by the first prime minister of Estonia, Mart Laar, which explained the country’s rise from ruin at the end of Soviet occupation in 1991 to become one of the most innovative societies in the world today.
Following Estonia’s independence with the collapse of the Soviet Union, its economy was left reeling. Everyday life for most of the country’s citizens was dire. Its currency was stripped of any value. Shops were empty and food was rationed. The gas shortage was so bad that the tattered government planned to evacuate the capital of Tallinn to the countryside. Industrial production dropped in 1992 by more than 30 percent, a steeper decline than America suffered during the Great Depression. Inflation skyrocketed to over 1,000 percent, and fuel costs soared by 10,000 percent. The only system left working was the informal market, which, along with weak legal protections and unprotected borders, facilitated an increase in organized crime for Estonia and its neighbors. This was happening right around the time that Silicon Valley was about to take off with the advent of the commercial Internet.
Open nations thrive - but all are experiencing a change in the conditions of change, and the concept of a new economy is no longer simply hype, even if we don’t quite know what the new economic paradigm is. One thing that continues to be discussed is a basic platform-safety-net for everyone.
A Universal Basic Income Is The Bipartisan Solution To Poverty We've Been Waiting For
What if the government simply paid everyone enough so that no one was poor? It's an insane idea that's gaining an unlikely alliance of supporters.
There's a simple way to end poverty: the government just gives everyone enough money, so nobody is poor. No ifs, buts, conditions, or tests. Everyone gets the minimum they need to survive, even if they already have plenty.
This, in essence, is "universal minimum income" or "guaranteed basic income"—where, instead of multiple income assistance programs, we have just one: a single payment to all citizens, regardless of background, gender, or race. It's a policy idea that sounds crazy at first, but actually begins to make sense when you consider some recent trends.
The first is that work isn't what it used to be. Many people now struggle through a 50-hour week and still don't have enough to live on. There are many reasons for this—including the heartlessness of employers and the weakness of unions—but it's a fact. Work no longer pays. The wages of most American workers have stagnated or declined since the 1970s. About 25% of workers (including 40% of those in restaurants and food service) now need public assistance to top up what they earn.
The important thing is to create a floor on which people can start building some security.
This is a vision of both the smart city (think self-driving cars as the ‘new mass-transit’) and the sustainable, human-centric city. The speed of the transformation is also worth noting.
How Ljubljana Turned Itself Into Europe's 'Green Capital'
A booming, car-free downtown and underground waste disposal facilities show how a small city can lead the way on sustainability.
LJUBLJANA, Slovenia—Along the narrow streets on the banks of the Ljubljanica River, the only sounds you’re likely to hear are the patter of shoes on cobblestones, the voices of people out walking and the clanking of glasses at sidewalk cafes.
It’s much changed from 10 years ago, when these streets were clogged with traffic. There was little room for pedestrians then. Those who dared to walk had to dodge cars and buses and breathe fumes from their tailpipes.
Now, not just these riverside streets but all of Ljubljana’s compact core is essentially car-free. Only pedestrians, bicycles, and buses are allowed; an electric taxi service called Kavilir offers the elderly, disabled or mothers with children free rides. If you live in the center or want to drive there, you must park your vehicle at an underground garage just outside the car-free area and walk from there. Fears that this would kill local businesses never came to pass. If anything, business and tourism have increased in the historic center.
Ljubljana’s successful fight against traffic is one reason the European Commission named the city European Green Capital for 2016. That’s a title that has frequently gone to acknowledged leaders of the debate on urban sustainability, wealthy cities such as Copenhagen, Stockholm or Hamburg. The choice of Slovenia’s small capital shows that cities of modest size and means have lessons to offer, too. It also shows that smaller cities can make a staggering amount of change happen in a short period of time.
European cities such as Brussels, Madrid, and Oslo have announced plans to close parts of their city centers to traffic. Ljubljana offers a contained laboratory—it takes only 15 minutes to walk from one side of the car-free area to the other—to see where such policies lead. For example, the city built the underground garage near the center, and plans to build more. The city also is creating a park-and-ride system where commuters can leave their cars; the cost of parking includes a return bus ticket to the center.
New ways to provide ubiquitous internet are being developed - too bad they are not of, by, and for the public.
Google plans to beam 5G internet from solar drones
Project Skybender is experimenting with millimeter-wave radio transmissions.
Google has a new top secret project by the same team that brought us Project Loon, according to The Guardian. It's called Project Skybender, and it aims to deliver 5G internet from solar drones. Mountain View has reportedly begun experimenting with millimeter wave-based internet in Virgin Galactic's Gateway to Space terminal at Spaceport America in New Mexico. Millimeter waves are believed to be capable of transmitting data 40 times faster than LTE and could become the technology behind 5G internet. DARPA began working on an internet connection based on it for remote military bases in 2012.
University of Washington professor Jacques Rudell told The Guardian that "[t]he huge advantage of millimetre wave is access to new spectrum because the existing cellphone spectrum is overcrowded. It's packed and there's nowhere else to go." The problem with millimeter wave transmissions, though, is that they fade after a short distance and can't compare to a mobile phone signal's range. That's likely one of the issues Google is trying to solve if it aims to beam internet from the sky.
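To see why millimetre-wave links fade over short distances while ordinary cellphone signals do not, it helps to compare free-space path loss at the two carrier frequencies. The short Python sketch below is a back-of-the-envelope illustration only; the 2 GHz, 60 GHz, and 1 km figures are assumptions chosen for the example, not numbers from the article.

```python
import math

def free_space_path_loss_db(freq_hz: float, distance_m: float) -> float:
    """Friis free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Illustrative comparison (assumed figures): a 2 GHz cellular carrier
# versus a 60 GHz millimetre-wave carrier, both over a 1 km link.
for freq_ghz in (2, 60):
    loss = free_space_path_loss_db(freq_ghz * 1e9, distance_m=1_000)
    print(f"{freq_ghz:>3d} GHz over 1 km: {loss:.1f} dB free-space path loss")

# The 60 GHz link loses roughly 30 dB more from the higher carrier frequency alone,
# before oxygen absorption and rain fade, which hit millimetre waves especially hard.
```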
Project Skybender is currently using an "optionally piloted aircraft (OPA)" called Centaur and a solar-powered drone called Solara 50 made by Titan Aerospace, which the Big G snapped up in 2014, for its tests. Google has permission from the FCC to continue testing the drone-internet system in New Mexico until July. We'll most likely hear more details as its development progresses, the same way that Google regularly announces the latest details about Project Loon.
This may be of interest to anyone with a home phone line.
Driving Robocallers Crazy With the Jolly Roger Bot
Robocalls are among the more annoying modern inventions, and consumers and businesses have tried just about every strategy for defeating them over the years, with little success. But one man has come up with a bot of his own that sends robocallers into a maddening hall of mirrors designed to frustrate them into surrender.
The bot is called the Jolly Roger Telephone Company, and it’s the work of Roger Anderson, a veteran of the phone industry himself who had grown tired of the repeated harassment from telemarketers and robocallers. Anderson started out by building a system that sat in front of his home landlines and would tell human callers to press a key to ring through to his actual phone line; robocallers were routed directly to an answering system. He would then white-list the numbers of humans who got through.
“Now, I figured telemarketers would just press a button, so I also programmed it to send me an email that included links to set the behavior next time this same caller-id called. I could (1) send this caller to a disconnect message, (2) let them through to the house, or (3) challenge them again next time,” Anderson said in a post detailing the system.
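The screening flow Anderson describes is simple enough to sketch. The toy Python version below is purely illustrative - it is not the Jolly Roger code, and the class and function names are invented for the example - but it captures the logic: challenge unknown callers, white-list humans who press the key, and route known robocallers to whatever behaviour the owner selected from the emailed link.

```python
from enum import Enum

class Action(Enum):
    DISCONNECT = 1   # play a disconnect message
    LET_THROUGH = 2  # ring the house
    CHALLENGE = 3    # challenge the caller again next time

class CallScreener:
    """Toy model of the screening flow described in the article (hypothetical names)."""

    def __init__(self):
        self.whitelist = set()   # numbers confirmed to be human
        self.overrides = {}      # caller_id -> Action chosen by the owner via emailed link

    def handle_call(self, caller_id: str, pressed_key: bool) -> str:
        if caller_id in self.whitelist or self.overrides.get(caller_id) == Action.LET_THROUGH:
            return "ring house"
        if self.overrides.get(caller_id) == Action.DISCONNECT:
            return "play disconnect message"
        # Unknown or re-challenged caller: humans press a key, robocallers usually don't.
        if pressed_key:
            self.whitelist.add(caller_id)
            return "ring house"
        return "route to answering system; email owner a link to set behaviour"

screener = CallScreener()
print(screener.handle_call("555-0100", pressed_key=True))    # human caller -> ring house
print(screener.handle_call("555-0199", pressed_key=False))   # robocaller -> answering system
```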
Here is an interesting 18-minute YouTube interview about creating a competitor to Uber. This is a ‘weak signal’ of the disruption of disruption that decentralized blockchains can enable, as well as of the potential for new forms of decentralized, truly collaborative organizations. It also highlights the importance of coding literacy for creating capabilities in the digital environment, and the looming need for digital infrastructure as a public utility - including the possibility of public clouds as a new form of commons. The name ‘Arcade City’ is suggestive of the emerging smart city.
Uber Competitor 'Arcade City' to Launch Valentine's Day -- Cryptocurrency Accepted
The former 'illegal Uber driver' Christopher David -- who provides rides to Portsmouth, New Hampshire residents when cab companies will not -- has garnered 1,600+ drivers to launch a competing app on February 14th. It's called Arcade City.
Here’s a great article listing some of the key researchers in AI.
AI is coming on fast: Here are the 10 people the AI elite follow on Twitter
To say Artificial Intelligence is moving into key leadership positions in institutions around the world is only a slight exaggeration.
Google just replaced its retiring long-time head of all search technology with its leader of Artificial Intelligence and Machine Learning. Microsoft just spent a quarter billion dollars acquiring an AI-informed user interface technology (SwiftKey). Both of those things just happened yesterday. Venture investments in AI nearly doubled in the last 3 months.
We’re going to hear more and more about AI in the coming months and years. How fast is it growing? Is General human-level AI realistic in the near term (vs narrow, job-specific AI)? What would the consequences of such a development be? Is this thing or that thing really AI technology, or is it just a shiny squirrel wrapping itself in the tech trend of the day? Should a machine be required to get human permission before deciding to use lethal force? Will most humans have jobs in the mid-term future?
These questions are important to all businesses (and humans). But where can you go to efficiently keep up on this trend, find credible perspectives on the news, and discover important opportunities for your business? (Free bonus: maybe even get an early heads-up on the robot apocalypse.)
Below you’ll find a Top 10 list: the top ten people in AI that you should know. These are the people that the very most selective experts in AI pay attention to.
One more domain where Watson can bring enhanced cognition and analysis - will it replace humans? Not all of them - but whoever remains will have to access Watson’s enhanced memory and analysis capabilities.
IBM’s Automated Radiologist Can Read Images and Medical Records
Software that can read medical images and written health records could help radiologists work faster and more accurately.
Most smart software in use today specializes in one type of data, be that interpreting text or guessing at the content of photos. Software in development at IBM has to do all of those at once. It’s in training to become a radiologist’s assistant.
The software is code-named Avicenna, after an 11th century philosopher who wrote an influential medical encyclopedia. It can identify anatomical features and abnormalities in medical images such as CT scans, and also draws on text and other data in a patient’s medical record to suggest possible diagnoses and treatments.
Avicenna is intended to be used by cardiologists and radiologists to speed up their work and reduce errors, and is currently specialized to cardiology and breast radiology. It is currently being tested and tuned up using anonymized medical images and records. But Tanveer Syeda-Mahmood, a researcher at IBM’s Almaden research lab near San Jose, California, and chief scientist on the project, says that her team and others in the company are already getting ready to start testing the software outside the lab on large volumes of real patient data. “We’re getting into preparations for commercialization,” she says.
Avicenna “looks” at medical images using a suite of different image-processing algorithms with different specialties. Some have been trained to judge, for example, how far down a patient’s chest a particular CT scan slice comes from. Others can identify organs or label abnormalities such as blood clots. Some of these image components use a technique called deep learning, which has recently produced major leaps in the accuracy of image recognition software (see “AI Advances Make It Possible to Search, Shop with Images”).
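IBM has not published Avicenna’s internals, so the following is only a schematic sketch of the kind of multi-model pipeline the article describes: several specialised image analysers (slice locator, organ labeller, abnormality detector) whose findings are pooled with information drawn from the written record before anything is suggested to a radiologist. All names and values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Finding:
    source: str
    label: str
    confidence: float

@dataclass
class RadiologyPipeline:
    """Hypothetical ensemble of specialised analysers, in the spirit of the article."""
    image_analysers: List[Callable] = field(default_factory=list)
    text_analysers: List[Callable] = field(default_factory=list)

    def analyse(self, ct_slice, record_text: str) -> List[Finding]:
        findings = []
        for analyser in self.image_analysers:   # e.g. slice locator, organ labeller, clot detector
            findings.extend(analyser(ct_slice))
        for analyser in self.text_analysers:    # e.g. risk factors pulled from the written record
            findings.extend(analyser(record_text))
        # A real system would reconcile findings into ranked diagnoses and treatments;
        # here we simply sort by confidence for a radiologist to review.
        return sorted(findings, key=lambda f: f.confidence, reverse=True)

def toy_clot_detector(ct_slice):
    # Stand-in for a trained deep-learning model; always reports one illustrative finding.
    return [Finding("clot_detector", "possible blood clot", 0.72)]

pipeline = RadiologyPipeline(image_analysers=[toy_clot_detector])
for f in pipeline.analyse(ct_slice=None, record_text="history of hypertension"):
    print(f.source, f.label, f.confidence)
```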
Thinking of solar energy harnessed in the desert, and near oceans to cheaply desalinate water, one can envision a terraforming project that enables the blossoming of the desert. Here is one potential key to such a grand vision.
Prize-winning technology to make the desert bloom
http://www.theengineer.co.uk/prize-winning-technology-to-make-the-desert-bloom/?cmpid=tenews_1866392
Line in the sand: New technology could transform poor-quality sandy soils into high-yield agricultural land.
Through a combination of climate change, drought, overgrazing and other human activities, desertification across the world is on the march. It’s a process defined by the UN as “land degradation in arid, semi-arid and dry sub-humid regions”. Given that around 40 per cent of the Earth’s land surface is occupied by drylands – home to around two billion people – the potential for desertification to impact the planet is huge. A recent report from the Economics of Land Degradation Initiative claimed that it’s a problem costing the world as much as US$10.6tn every year – approximately 17 per cent of global gross domestic product.
The refugee crisis in Europe has highlighted the difficulties that arise when large numbers of people migrate. However, the numbers arriving from countries such as Syria, Lebanon and Eritrea pale in comparison to those that could be forced into exile by changing climate conditions. According to the UN’s Convention to Combat Desertification (UNCCD), the process could displace as many as 50 million people over the next decade.
But one Norwegian start-up is developing a technology to wage a frontline battle with desertification. Desert Control is a Norwegian company set up by Kristian and Ole Morten Olesen, alongside chief operating officer Andreas Julseth. It was recently awarded first prize at ClimateLaunchpad, a clean-tech business competition that attracted more than 700 entries from 28 countries across Europe. The product that earned Desert Control top honours was Liquid NanoClay, a mixture of water and clay that is mixed in a patented process and used to transform sandy desert soils into fertile ground.
According to Desert Control, virgin desert soils treated with Liquid NanoClay produced a yield four times greater than untreated land, using the same amount of seeds and fertiliser, and less than half the amount of water. It found that Liquid NanoClay acts as a catalyst for Mycorrhizal fungi when nourishment is available, with the fungi responsible for the increased yield.
This is a longish Nature article and an important discussion of a new theory of life - a must-read for anyone interested in the convergences of science around the origin and definition of what life is, or, maybe more precisely, of what artificial life could be.
Possible applications aside, active matter excites scientists because it so closely resembles the most complex self-organizing systems known: living organisms.
The physics of life
From flocking birds to swarming molecules, physicists are seeking to understand 'active matter' — and looking for a fundamental theory of the living world.
First, Zvonimir Dogic and his students took microtubules — threadlike proteins that make up part of the cell's internal 'cytoskeleton' — and mixed them with kinesins, motor proteins that travel along these threads like trains on a track. Then the researchers suspended droplets of this cocktail in oil and supplied it with the molecular fuel known as adenosine triphosphate (ATP).
To the team's surprise and delight, the molecules organized themselves into large-scale patterns that swirled on each droplet's surface. Bundles of microtubules linked by the proteins moved together “like a person crowd-surfing at a concert”, says Dogic, a physicist at Brandeis University in Waltham, Massachusetts.
With these experiments, published in 2012, Dogic's team created a new kind of liquid crystal. Unlike the molecules in standard liquid-crystal displays, which passively form patterns in response to electric fields, Dogic's components were active. They propelled themselves, taking energy from their environment — in this case, from ATP. And they formed patterns spontaneously, thanks to the collective behaviour of thousands of units moving independently.
These are the hallmarks of systems that physicists call active matter, which have become a major subject of research in the past few years. Examples abound in the natural world — among them the leaderless but coherent flocking of birds and the flowing, structure-forming cytoskeletons of cells. They are increasingly being made in the laboratory: investigators have synthesized active matter using both biological building blocks such as microtubules, and synthetic components including micrometre-scale, light-sensitive plastic 'swimmers' that form structures when someone turns on a lamp. Production of peer-reviewed papers with 'active matter' in the title or abstract has increased from less than 10 per year a decade ago to almost 70 last year, and several international workshops have been held on the topic in the past year.
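A concrete feel for what physicists mean by “active matter” comes from the Vicsek model, a canonical minimal simulation of flocking: each self-propelled particle moves at constant speed and repeatedly aligns its heading with the average heading of its neighbours, plus noise, and global order emerges from purely local rules. The sketch below is illustrative only; the particle count, density, and noise level are arbitrary choices, not parameters from any of the experiments described.

```python
import numpy as np

rng = np.random.default_rng(0)

def vicsek_step(pos, theta, box=10.0, radius=1.0, speed=0.05, noise=0.3):
    """One update of the Vicsek flocking model with periodic boundaries."""
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        # displacement to every particle, with wrap-around (periodic) boundaries
        d = pos - pos[i]
        d -= box * np.round(d / box)
        neighbours = np.hypot(d[:, 0], d[:, 1]) < radius
        # align with the mean heading of neighbours (self included), plus angular noise
        mean_angle = np.arctan2(np.sin(theta[neighbours]).mean(),
                                np.cos(theta[neighbours]).mean())
        new_theta[i] = mean_angle + noise * (rng.random() - 0.5)
    step = speed * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return (pos + step) % box, new_theta

pos = rng.uniform(0.0, 10.0, size=(200, 2))
theta = rng.uniform(-np.pi, np.pi, size=200)
for _ in range(300):
    pos, theta = vicsek_step(pos, theta)

# Polar order parameter: ~0 for random headings, ~1 when the whole group flocks together.
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polar order parameter after 300 steps: {order:.2f}")
```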
More and more, the sciences of medical intervention are sounding like science fiction. What also comes to mind is a whole new form of cosmetic surgery - height implants.
Injectable foam repairs bones
With 70% of bone consisting of a calcium phosphate mineral called hydroxyapatite, calcium phosphate cements (CPCs) are widely used in surgery as bone substitutes. CPCs have excellent properties for the job – they are injectable, biocompatible, self-setting and microporous allowing nutrients to flow throughout the implant site, which assists bone regeneration.
However, it’s been a challenge introducing macroporosity into such injectable CPCs, which is desirable to facilitate quicker bone regeneration and to reinforce cancellous bone, the flexible, spongy tissue that degenerates with osteoporosis. Macroporous CPCs do exist, but those that are injectable have poor mechanical properties.
Now Pierre Weiss and colleagues at the University of Nantes have created a macroporous self-setting CPC in the form of an injectable foam by using a silanised-hydroxypropyl methylcellulose (Si-HPMC) hydrogel as a foaming agent. ‘Our approach is simple and gives us really good results in terms of mechanical properties and macroporous structures,’ says Weiss.
And more - brain research and/or brain-computer interfaces.
In a brain, dissolvable electronics monitor health and then vanish
The new transient sensors harmlessly melt away after wirelessly transmitting info.
From the folds and crinkles of a living brain, a fleeting fleck of electronics smaller than a grain of rice can wirelessly relay critical health information and then gently fade away.
The transient sensors, which can measure pressure, temperature, pH, motion, flow, and potentially specific biomolecules, stand to permanently improve patient care, researchers said. With a wireless, dissolving sensor, doctors could ditch the old versions, which require tethering patients to medical equipment and invasive surgery to remove them, adding risks of infection and complications for already vulnerable patients.
Though the first version, reported in Nature, was designed for the brain and tested in the noggins of living rats, the authors think the sensors could be used in many tissues and organs for a variety of patients—from car crash victims with brain injuries to people with diabetes. “Sensors are incredibly important,” chief resident of neurosurgery and study co-author Rory Murphy of Washington University School of Medicine told Ars. But they’ve been a hassle, too.
To develop the wireless, dissolving model, Murphy and colleagues teamed up with John Rogers’ group at the University of Illinois at Urbana-Champaign, which specializes in wearable and implantable electronics. The wee gadgets they came up with generally contain biodegradable silicon-based piezoresistive sensors, which change their electrical resistance with slight bending, surrounded by more silicon, magnesium, and a dissolvable copolymer, poly(lactic-co-glycolic acid) (PLGA), which is already used in medical devices.
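The sensing principle here is piezoresistance: bending strains the silicon element, the strain changes its electrical resistance, and a calibration maps that change to a pressure reading. The tiny sketch below only illustrates that chain of reasoning; the gauge factor and calibration constant are assumed values for the example, not numbers from the Nature paper.

```python
def strain_from_resistance_change(delta_r_over_r: float, gauge_factor: float = 100.0) -> float:
    """Piezoresistive relation: delta_R / R = gauge_factor * strain.
    Doped-silicon gauge factors of order 100 are typical, but this value is an assumption."""
    return delta_r_over_r / gauge_factor

def pressure_from_strain(strain: float, pa_per_unit_strain: float = 5.0e6) -> float:
    """Hypothetical linear calibration from membrane strain to pressure in pascals."""
    return pa_per_unit_strain * strain

# Example: a 0.02% change in resistance measured on the implanted element
delta = 2e-4
strain = strain_from_resistance_change(delta)
print(f"strain = {strain:.1e}, estimated pressure = {pressure_from_strain(strain):.0f} Pa")
```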
Essentially, the sensors are made of elements and minerals that we already eat and drink, Murphy said.
Sensors and nanobots are looming ever closer to our lives as well-being prosthetics and capability enhancers.
This Tiny Robot Team Could Help Stop the No. 1 Killer in America
International team of scientists is working on nanobots that could unclog arteries.
This year a few mice are set to become the first patients for a brand-new kind of heart disease treatment.
It’s a surgery being performed by tiny microsurgeons. The surgeons, called nanorobots, are really tiny groups of magnetically charged particles that band together to break up clogged arteries.
The robot molecules work on blockages in two stages. First they deliver drugs that help soften clogged arteries. Then they charge into battle, drilling in to blast heart blockages apart.
Biomedical engineer MinJun Kim, a professor at Drexel University, is part of the international team of scientists from the U.S., Switzerland, and South Korea who are working on the tech. He says the robots are controlled by harnessing the power of magnetic resonance imaging (MRI), the tunnel-like machines commonly used for diagnostic imaging in hospitals. Working with the nanobots, the MRI machines can serve as a kind of command and control center: both steering and observing the magnetically charged bots as they navigate their way around inside the body.
This is another breakthrough in combining optics and electronics - we can expect even more and better celestial images on the horizon.
Lockheed Martin's New SPIDER Imaging Tech Could Lead to Smaller, More Powerful Telescopes
Lockheed Martin this week unveiled new technology that may revolutionize the telescope as we know it. Called Segmented Planar Imaging Detector for Electro-optical Reconnaissance (or SPIDER), the technology is based on the technique of interferometry that uses a thin array of tiny lenses to collect light instead of the large bulky lenses used in existing telescope technology.
SPIDER does away with the large telescopes and complex mirror-based optics and replaces them with thousands of tiny lenses that are connected to PICs: silicon-chip photonic integrated circuits. According to Lockheed Martin, these PICs combine the light in pairs to form interference fringes, which are then measured and used to create a digital image. This technology allows Lockheed Martin to increase the resolution of a telescope while maintaining a thin disk shape.
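The “combine the light in pairs” step is classical interferometric imaging: each pair of lenslets forms a baseline that measures one spatial-frequency component (a visibility) of the scene, and an inverse Fourier transform over many such measurements recovers a digital image. The 1-D toy below sketches only that principle; it is not Lockheed Martin’s processing chain, and the scene and baseline sampling are invented for the example.

```python
import numpy as np

# Toy 1-D 'scene': brightness as a function of angle on the sky (invented for illustration)
n = 256
scene = np.zeros(n)
scene[100:110] = 1.0   # an extended source
scene[180] = 5.0       # a bright point source

# Each lenslet pair (baseline) measures one 'visibility' - one Fourier component of the scene
full_spectrum = np.fft.fft(scene)

# Suppose only the shorter baselines are measured (an arbitrary choice for the demo)
sampled = np.zeros(n, dtype=complex)
measured = np.arange(0, 64)
sampled[measured] = full_spectrum[measured]
sampled[-measured % n] = full_spectrum[-measured % n]  # keep conjugate symmetry for a real image

# Image formation: inverse Fourier transform of the measured visibilities
reconstruction = np.real(np.fft.ifft(sampled))
print("brightest pixel in reconstruction:", int(np.argmax(reconstruction)),
      "(true point source sits at index 180)")
```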
Unlike conventional telescopes that take years to construct, Lockheed Martin’s PICs are easy to produce using a laser printing process that takes only a few weeks. Because they rely on integrated circuits, they also do not contain large lenses and mirrors that need to be polished and aligned. This characteristic simplifies their deployment, as technicians do not have to worry about misalignment in orbit.
SPIDER also offers up to 99% savings in size and weight compared with traditional telescopes, and is both energy efficient and easily scalable in size. This flexibility makes it possible for Lockheed Martin to move beyond the cylindrical telescope and create squares, hexagons, and even conformal concepts.
Lockheed is developing this technology using funding from the Defense Advanced Research Projects Agency (DARPA) and assistance from researchers at University of California, Davis. The SPIDER design is still in the early stages of development, with five to ten years of work required before the technology is matured and ready for widespread deployment. Besides its use in space-based applications where the small size and light payload of SPIDER is highly desirable, the technology also could be used for safety sensors in automobiles, military reconnaissance, and even targeting instrumentation in aircraft, helicopters, and boats.
This is an interesting article suggesting that the coming year will balance last year’s breakthroughs in renewable energy with breakthroughs in energy storage.
From liquid air to supercapacitors, energy storage is finally poised for a breakthrough
Banks of batteries and other technologies could lower energy bills and help renewable power, says energy storage industry as it gears up for bumper year
“It doesn’t always rain when you need water, so we have reservoirs - but we don’t have the same system for electricity,” says Jill Cainey, director of the UK’s Electricity Storage Network.
But that may change in 2016, with industry figures predicting a breakthrough year for a technology not only seen as vital to the large-scale rollout of renewable energy, but also offering the prospect of lowering customers’ energy bills.
Big batteries, whose costs are plunging, are leading the way. But a host of other technologies, from existing schemes like splitting water to create hydrogen, compressing air in underground caverns, flywheels and heated gravel pits, to longer term bets like supercapacitors and superconducting magnets, are also jostling for position.
In the UK, the first plant to store electricity by squashing air into a liquid is due to open in March, while the first steps have been taken towards a virtual power station comprised of a network of home batteries.
“We think this will be a breakthrough year,” says John Prendergast at RES, a UK company that has 80MW of lithium-ion battery storage operational across the world and six times more in development, including its first UK project at a solar park near Glastonbury. “All this only works if it reduces costs for consumers and we think it does,” he says.
And another article outlining the looming transformation of energy geo-politics.
US electricity industry's use of coal fell to historic low in 2015 as plants closed
The biggest source of climate pollution dropped to 34% of US electricity generation and co-author of a new report says: ‘These are permanent changes’
America’s use of coal for electricity dropped to its lowest point in the historical record in 2015, delivering a new blow to an industry already in painful decline.
The dirtiest of fossil fuels and America’s biggest single source of climate pollution, coal accounted for just 34% of US electricity generation last year, according to the Sustainable Energy in America handbook on Thursday.
It was the smallest share for coal in the electricity mix since 1949, the first year in which Energy Information Administration records were kept.
“It was a really big year,” said Colleen Regan, a power analyst for Bloomberg New Energy Finance, who was co-author of the report for the Business Council on Sustainable Energy. “It was a landmark year with a long-term trajectory that we saw as the US decarbonising its power fleet.”
Coal made up 39% of electricity supply in 2014, the annual report said.
The changes in the US electricity system last year also produced milestone benefits for the climate, the report found.
Greenhouse gas emissions from the power sector – the largest single source of climate pollution and the target of Barack Obama’s clean power plan – fell 18% below 2005 levels last year, the report found.
What is a true narrative of the future? Humans with advanced technologies as advanced technologies themselves. What is the human equivalence of emotion in and as advanced technologies?
All TV science fiction projects today’s human into tomorrow - because tomorrow’s human is beyond apprehension by today’s human.
What rules of thumb, heuristics, memory, and pattern re-cognition become available to a mind extended by Google, Watson and other assemblages of AI accessing a sensorium of IoT generating peta-data and more?
The Ancients personified all their Gods - because personification was the only pattern they could re-cognize emotionally - are we moderns more capable of re-cognizing pattern-gods - attractors of emergent meta-consciousness?
At some point the cost of remaining Sapiens 1.0 became too much - in time, effort and opportunity. But it took a very long while to reach the ontological threshold where we could overcome the mourning of loss - of letting emotion 1.0 go.
john verdon - after watching an episode of the sci-fi TV series “The Expanse”