Instead of predictive process flows, creative work follows a different logic. Work is about community-based cognitive presence. But cognition is just part of the answer. Work tomorrow will be even more about social presence. To work and to manage is to participate in live conversations. A dramatic shift is needed in the mental framework of information, communication and work. Without this change in mindset, no efficient digital transformation can take place in the corporate world. Work is communication. Conversations and narratives are the new documents.
The first face of digital transformation is about new ways to be present and new ways to communicate
You cannot design live interaction. Conversations cannot be controlled. The only way to influence conversations is to take part in them. You cannot plan in the traditional sense of specifying a structure or a process and then implementing it. As many have experienced, communities seldom grow beyond the group that initiated the conversation, because they fail to attract enough participants. Many business communities also fall apart soon after their launch because they don’t have the energy to sustain themselves.
Communities, unlike business units, need to continuously invite the interaction that keeps them alive.
Community design is closer to iterative, creative learning than to traditional organizational design. Live communities reflect and redesign themselves throughout their life cycle. This is why design should always start with very light structures and very few elements.
What is also different is that a good community architecture invites many kinds of participation. We used to think that we should encourage all the community members to participate equally. Now we know that a large number of the network members are, and should be, peripheral. In a traditional meeting we would consider this type of participation half-hearted, but in a network a large percentage of the members are always peripheral and rarely contribute. Because the boundaries of a live community are always fluid, even those on the outer edges can become involved for a time as the focus shifts to their area of particular interest.
Because conversations and communities need to be alive to create value, we need an approach to management that appreciates passion, relationships and voluntary participation. Rather than focusing on accountability, community design should concentrate on energizing, enriching participation.
The new structures and new designs are about communities continuously organizing themselves around shared contexts, meaning shared interests and shared practices. The focus of industrial management was on the division of labor and the design of vertical/horizontal communication channels. The focus should now be on cooperation and emergent interaction based on transparency, interdependence and responsiveness.
ESKO KILPI ON INTERACTIVE VALUE CREATION
The two faces of digital transformation
Think of This! - from Henry Mintzberg - only 22 elections in most democratic countries until the end of the 21st century. Do we really have the time to diddle around with 'gridlock'?
Today is the day to challenge the farce
Exciting news this week from the Gang of 7 in a Bavarian castle: the world will be freed of fossil fuels by the end of the century! Politicians who can’t keep a promise from one election to the next expect us to believe that they will keep a promise for 2099. (For good measure, they threw in—yet again, if my memory serves me well—an end to extreme poverty and hunger, this by 2030.)
Do you know how many elections will be held by 2099? 22 in each of the democratic countries. What if some other party comes to power, say in 2049, and decides it prefers 2199? Promises are cheap, you may have noticed, and so are plans. Our future deserves better than that; it will be shaped by what we do today.
Breakthrough? Groupthink? or Camouflage?
Some of these G7 people may believe they have orchestrated a breakthrough. I believe they have joined a groupthink. Some are so concerned about global warming—or at least about what they have to tell their children back home—that they must have gone to Germany ready to believe in almost anything that looks good, so long as it inconveniences no-one. Hence they all lined up to save our world by 2099. Will there be anything left to save?
The famous Meeker Report is out. This is a must view for anyone interested in the current state of the Internet. There’s a 30 min video of her presentation. In her slide presentation, the section on emerging apps for enterprises is imperative for anyone wanting to see where the future of transaction and information management is headed.
Mary Meeker Takes You on a Tour of the 2015 Internet Trends Report
Here are the highlights:
Meeker on the big picture: “So what’s happened since 1995? Internet penetration globally has gone from one percent to 39 percent. Mobile phone user penetration has gone from 1 percent to 73 percent. Public Internet company market capitalizations have gone from $17 billion in 1995 to $2.4 trillion today. User control of content has grown significantly.” But she says recent jumps have been less sharp in Internet user growth and smartphone subscription growth: “The incremental new users for Internet smartphones are harder to get because of where we are in the adoption cycle.”
On mobile ads: “I for one am really excited about Vessel’s five-second ad. Short-form video, making a point in five seconds, I think is a beautiful thing.” (How about making 50 kajillion points in 30 minutes?) Meeker says a promising up-and-coming feature is “buy buttons,” allowing direct purchase from places like Google, Facebook and Twitter: “We’ll look back two years [from now] and be surprised about just how pervasive they’ve become.”
On Slack, Square, Stripe, Domo, DocuSign, Intercom, Gainsight, Directly, Zenefits, Anaplan, Greenhouse, Checkr, GuideSpark and Envoy: “Enterprise computing is being reimagined one business process at a time.” Meeker observed that today’s enterprise founders are often working on relieving pain points they experienced at their prior companies.
The on-demand economy and beyond: “The average American spends 33 percent of their income on housing, 18 percent on transportation and 14 percent on food. The average individual needs shelter every day, drives 37 miles a day and visits a grocery store twice a week. These are high size and spending, high engagement markets that also have traditionally weak user experiences that we think are being reimagined.”
On bubbles and valuations: “The one rule of thumb is that very few companies will win, and those that do, win big.”
On diversity, which was Meeker’s final “one more thing” of the presentation, and a particularly acute issue at Kleiner Perkins Caufield & Byers these days: “Diversity matters. It’s just good business.”
Here’s Meeker’s whole slide presentation - http://www.kpcb.com/internet-trends
Following Mary Meeker’s report - here are some interesting predictions for the next decade - it’s a short article worth the read.
7 Top Futurists Make Some Pretty Surprising Predictions About What The Next Decade Will Bring
From smartphone apps that can do seemingly everything to driverless cars and eerily humanlike robots, the past decade has seen dramatic advances in science and technology. What amazing advances are we likely to see in the next 10 years?
To find out, HuffPost Science reached out to seven top futurists -- and they gave us some pretty surprising predictions. Keep reading to learn more.
Futurists include:
Dr. Michio Kaku, professor of theoretical physics at the City University of New York
Dr. Ray Kurzweil, inventor, pioneering computer scientist, and director of engineering at Google
Dr. Anne Lise Kjaer, founder of London-based trend forecasting agency Kjaer Global
Dr. James Canton, CEO of the San Francisco-based Institute for Global Futures
Jason Silva, host of National Geographic Channel's "Brain Games”
Dr. Amy Zalman, CEO & president of the World Future Society
Mark Stevenson, author of "An Optimist's Tour of the Future"
On the topic of innovation and the potential for paradigmatic change - here’s something related to financial innovation, including the blockchain.
Financial Industry Betting Big on Innovation
There’s been a huge wave of investment in finance and banking innovation over the last few years and the biggest players are making sure they won’t get left behind.
The big news yesterday was that of HSBC, which announced that it would be cutting up to 50,000 jobs (on top of the 37,000 jobs cut since 2011). It’s been a tough few years for the world’s biggest banks given changes in the regulatory and business environment. As compliance costs have increased, banks, including HSBC, have had to trim riskier departments.
But buried within the announcement was another key tidbit: HSBC will spend $1 billion on improving its “digital and automation programs,” according to the Wall Street Journal. Banks aren’t just trimming down, they’re actively investing in a digital future.
HSBC is hardly the only one. Other recent examples include:
- Deutsche Bank: Last week, Germany’s largest bank announced that it was partnering with technology companies to open three “innovation labs” around the world: Berlin/Microsoft, London/HCL, and Silicon Valley/IBM. The move will be accompanied by EUR 1 billion in spending on so-called “digital initiatives” over the next five years.
- UBS: The Swiss bank announced in April that it was opening a “blockchain research lab” in London to explore the merits of so-called blockchain technology in the context of financial services, with Chief Information Officer Oliver Bussmann noting that these emerging innovations could “not only change the way we do payments but it will change the whole trading and settlement topic.”
- Nasdaq: The US exchange operator recently revealed its plans to trial distributed ledgers to power securities settlement for private equities. CEO Bob Greifeld wants Nasdaq to be “a leader in the field” and is a “big believer” in the power of new emerging technologies “to effect fundamental change in the infrastructure of the financial services industry.”
- CME Group: Derivatives and futures exchange operator CME Group announced last year, according to Crain’s Chicago Business, that it was “looking to develop a pipeline to technology trends that could spark new businesses.” In a blog post last month, Rumi Morales, executive director of the Strategic Investment Group, wrote that, “among the technologies that will transform the financial services sector in the future, real-time payments is entering stage-right.” [CME Group is an investor in Ripple Labs.]
Those are just a handful of examples.
Here’s another view of the looming change approaching Big Banks.
The end of the big banks is coming
In the near future, banks will no longer be on street corners or housed in high-rise towers; they will be in our pockets or on our wrists.
Mobile banking is going viral, and the first adopters are the “millennials” (those born after 1981) and the “unbanked” poor of all ages around the world. They are leaping into this brave new banking world first, but the rest of us are going to follow.
I was in New York last week to take part in a conference held by Silicon Valley’s Singularity University on the bank of the future. Traditional banks will be replaced with low-cost tech solutions, and there will be cheering from the sidelines. Viacom Media polled 10,000 Americans and found that 4 of the 10 least-loved brands in the U.S. are banks and 71 percent said they would rather go to the dentist than listen to what any bank says.
Enter “fintech”: a burgeoning industry that includes Apple, Facebook and 8,000 start-ups in the U.S. that are targeting the financial sector’s various profit centers. Within five years, smartphones or smartwatches will replace credit and debit cards, wallets, (keys), lenders, brokers, insurance agents and money transfers abroad.
Further down the road, asset management and hedge funds will be replaced by cheap or free online ETFs (exchange-traded funds). Wealth management will be done online by robo-advisors, and investment banking will be disrupted dramatically by crowdfunding when U.S. regulators shortly give the green light to the sale of equities. “And there’s nothing the banks can do about it,” she said.
Strangely, Kenya and telecom-tech giant Vodafone are the world’s pioneers in the repurposing of tech to replace traditional banks. Kenya’s banking system is so badly broken and inadequate that Vodafone started a cellphone network called M-Pesa (Swahili for Mobile and Money) and 60% of the country’s adults have joined, some 17 million. Users can deposit, withdraw, transfer money and pay for goods and services with their mobile device. Deposits and withdrawals are made at cellphone retail outlets or with phone sales personnel, acting as banking agents. It’s now in Afghanistan, South Africa, India and Eastern Europe.
And why is it that every solution creates a whole new field of pressing problems as well as adjacent possibles (some of which will be wicked problems in their own right)? This is an interesting view on the consequences of complexity.
Why Our Genome and Technology Are Both Riddled With “Crawling Horrors”
When we build complex technologies, despite our best efforts and our desire for clean logic, they often end up being far messier than we intend. They often end up kluges: inelegant solutions that work just well enough. And a reason they end up being messy—despite being designed and engineered—is because fundamentally the way they grow and evolve is often more similar to biological systems than we realize.
When we discuss biology and technology, we have a tendency to talk about them as metaphorically similar: the brain as a computer, the evolutionary history of the airplane, even viruses on our machines. This imagery often elides the basic ways in which they are very different, and can lead us along incorrect ways of thinking. The brain is quite different from a laptop in how it works, and when we confuse natural selection with engineering design, we end up imposing teleology where it has no business. Despite this, it turns out that when it comes to viewing these both as complex systems and how they have ended up as kluges, there are some deep similarities.
Fundamentally, when we improve and build on to large technological systems—especially ones that are essential to our infrastructure and can ill afford downtime—it is a lot easier to tinker at the edges of these systems than to start from scratch. When we do this, we end up building the large piles that Ovid bemoaned, with extra pieces that might be obsolete, or not even understandable anymore. We get such artifacts as legacy code, the old components that everything newer is built upon in order to make it function. If you recently submitted your tax returns to the U.S. Internal Revenue Service, you might have used a website, but that is simply a modern door slapped on a dingy old building. Behind this modern interface lies computer software and hardware that are decades old. For example, these systems “process Americans’ tax returns by churning through millions of lines of assembly code written by hand in the early 1960s.” Many records are stored on magnetic tapes and paper returns entered by hand. Supporting the final Space Shuttle mission, in 2011, were “five IBM AP-101s, 25-year-old avionics computers whose technical specifications are exceeded by [2011’s] average smartphone.”
If the question is still about a digital divide between developed and less developed parts of the world - here’s a possibility that could transform that divide. Perhaps this is something that should be undertaken by a global public organization - such as the UN or some other non-profit organization - the idea is to build a global public infrastructure.
Elon Musk's SpaceX Plans To Launch 4,000 Satellites, Broadcasting Internet To Entire World
Elon Musk's SpaceX has officially asked for permission to build a constellation of 4,000 satellites capable of beaming the Internet to the most remote regions of the earth. The plan, outlined in a request to the Federal Communications Commission, would transform Musk's rocket company into a new Internet service provider to compete with the likes of Verizon and Comcast. And you thought you were having a productive day.
Musk first revealed his company's plan during a SpaceX event in January, though he didn't formally request authorization from the FCC until last month (the news was first revealed Wednesday by the Washington Post). The plan centers around the idea that SpaceX's Falcon 9 rocket would launch into orbit, then deploy a fleet of satellites that broadcast Internet signals to various points across the globe.
Here’s an interesting step in the progress toward optical computing.
IBM’s Silicon Photonics Technology Ready to Speed up Cloud and Big Data Applications
For the first time, IBM engineers have designed and tested a fully integrated wavelength multiplexed silicon photonics chip, which will soon enable manufacturing of 100 Gb/s optical transceivers. This will allow datacenters to offer greater data rates and bandwidth for cloud computing and Big Data applications.
“Making silicon photonics technology ready for widespread commercial use will help the semiconductor industry keep pace with ever-growing demands in computing power driven by Big Data and cloud services,” said Arvind Krishna, senior vice president and director of IBM Research. “Just as fiber optics revolutionized the telecommunications industry by speeding up the flow of data -- bringing enormous benefits to consumers -- we’re excited about the potential of replacing electric signals with pulses of light. This technology is designed to make future computing systems faster and more energy efficient, while enabling customers to capture insights from Big Data in real time.”
Silicon photonics uses tiny optical components to send light pulses to transfer large volumes of data at very high speed between computer chips in servers, large datacenters, and supercomputers, overcoming the limitations of congested data traffic and high-cost traditional interconnects. IBM’s breakthrough enables the integration of different optical components side-by-side with electrical circuits on a single silicon chip using sub-100nm semiconductor technology.
IBM’s silicon photonics chips use four distinct colors of light travelling within an optical fiber, rather than traditional copper wiring, to transmit data in and around a computing system. This new transceiver is estimated to be capable of digitally sharing 63 million tweets or six million images in just one second, or downloading an entire high-definition digital movie in just two seconds.
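As a rough sanity check on those figures, a 100 Gb/s link moves about 12.5 gigabytes per second; working backwards from the claims gives plausible per-item sizes (the sizes below are back-of-the-envelope inferences, not IBM’s own numbers):

```python
# Back-of-the-envelope check of the 100 Gb/s transceiver figures quoted above.
# The implied per-item sizes are illustrative inferences, not IBM's own numbers.

LINK_RATE_GBPS = 100                              # gigabits per second
bytes_per_second = LINK_RATE_GBPS * 1e9 / 8       # = 12.5 GB per second

tweets_per_second = 63e6                          # "63 million tweets" in one second
images_per_second = 6e6                           # "six million images" in one second
movie_seconds = 2                                 # "an entire HD movie in two seconds"

print(f"Implied size per tweet: {bytes_per_second / tweets_per_second:.0f} bytes")
print(f"Implied size per image: {bytes_per_second / images_per_second / 1e3:.1f} kB")
print(f"Implied HD movie size:  {bytes_per_second * movie_seconds / 1e9:.0f} GB")
```

Roughly 200 bytes per tweet, 2 kB per image and a 25 GB movie: the headline figures hang together.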
The technology industry is entering a new era of computing that requires IT systems and cloud computing services to process and analyze huge volumes of Big Data in real time, both within datacenters and particularly between cloud computing services. This requires that data be rapidly moved between system components without congestion. Silicon photonics greatly reduces data bottlenecks inside of systems and between computing components, improving response times and delivering faster insights from Big Data.
And another breakthrough.
STANFORD ENGINEERS' BREAKTHROUGH HERALDS SUPER-EFFICIENT LIGHT-BASED COMPUTERS
Light can transmit more data while consuming far less power than electricity, and an engineering feat brings optical data transport closer to replacing wires.
Stanford electrical engineer Jelena Vuckovic wants to make computers faster and more efficient by reinventing how they send data back and forth between chips, where the work is done.
In computers today, data is pushed through wires as a stream of electrons. That takes a lot of power, which helps explain why laptops get so warm.
"Several years ago, my colleague David Miller carefully analyzed power consumption in computers, and the results were striking," said Vuckovic, referring to David Miller, the W.M. Keck Foundation Professor of Electrical Engineering. "Up to 80 percent of the microprocessor power is consumed by sending data over the wires – so-called interconnects."
In a Nature Photonics article whose lead author is Stanford graduate student Alexander Piggott, Vuckovic, a professor of electrical engineering, and her team explain a process that could revolutionize computing by making it practical to use light instead of electricity to carry data inside computers.
Here’s a 5 min TED talk on programming bacteria. This is a must view - just to spark an imagining of the domestication of DNA.
Tal Danino: Programming bacteria to detect cancer (and maybe treat it)
Liver cancer is one of the most difficult cancers to detect, but synthetic biologist Tal Danino had a left-field thought: What if we could create a probiotic, edible bacteria that was "programmed" to find liver tumors? His insight exploits something we're just beginning to understand about bacteria: their power of quorum sensing, or doing something together once they reach critical mass. Danino, a TED Fellow, explains how quorum sensing works — and how clever bacteria working together could someday change cancer treatment.
In the domain of microbes - here’s another advance in the knowledge of their ecologies and, apparently, our own gene pools.
Ocean's microbiome has incredible diversity – and human likeness
We're a step closer to understanding the microbial community that inhabits the ocean – and it has some striking similarities to the community that lives inside our guts. The microbiome of the world's biggest ecosystem and one of the smallest appear to function in surprisingly similar ways.
Microscopic plankton produce a large proportion of the oxygen in the atmosphere – amounting to half of all oxygen produced by photosynthesis – but we know very little about these organisms. The data collected by researchers aboard the schooner Tara will change that. Between 2009 and 2013, the ship sailed the world's seas and oceans, collecting 35,000 plankton samples – both microbial and multicellular – from the upper layers of the water.
The first batch of the Tara studies is published today, and it reveals that planktonic marine life is far more diverse than anyone expected. For example, we already knew of about 4350 species of microalgae, 1350 species of protists and 5500 species of tiny animals, based on direct studies of their appearance. But the new genetic evidence suggests that there are probably three to eight times as many distinct species in each group as currently recognised.
Shinichi Sunagawa at the European Molecular Biology Laboratory in Heidelberg, Germany, and his colleagues used the genetic samples from the Tara voyage to create the Ocean Microbial Reference Gene Catalogue. It contains over 40 million genes from more than 35,000 species.
The possibilities of mind-computer interfaces are growing every day - here’s a very recent development.
Injectable device delivers nano-view of the brain
Promise against disease in electronic scaffolds
It’s a notion that might have come from the pages of a science-fiction novel — an electronic device that can be injected directly into the brain, or other body parts, and treat everything from neurodegenerative disorders to paralysis.
Sounds unlikely, until you visit Charles Lieber’s lab.
Led by Lieber, the Mark Hyman Jr. Professor of Chemistry, an international team of researchers has developed a method of fabricating nanoscale electronic scaffolds that can be injected via syringe. The scaffolds can then be connected to devices and used to monitor neural activity, stimulate tissues, or even promote regeneration of neurons. The research is described in a June 8 paper in Nature Nanotechnology.
“I do feel that this has the potential to be revolutionary,” Lieber said. “This opens up a completely new frontier where we can explore the interface between electronic structures and biology. For the past 30 years, people have made incremental improvements in micro-fabrication techniques that have allowed us to make rigid probes smaller and smaller, but no one has addressed this issue — the electronics/cellular interface — at the level at which biology works.”
“Existing techniques [for implanting electronics into the brain to treat a variety of disorders] …cause inflammation…”
“But with our injectable electronics, it’s as if it’s not there at all.”
“They are one million times more flexible than any state-of-the-art flexible electronics”
“They’re what I call ‘neuro-philic’ — they actually like to interact with neurons.”
Here’s a 20 min TED talk that discusses a new view of babies and logic. - Babies have to generalize from small samples all the time. :)
Laura Schulz: The surprisingly logical minds of babies
How do babies learn so much from so little so quickly? In a fun, experiment-filled talk, cognitive scientist Laura Schulz shows how our young ones make decisions with a surprisingly strong sense of logic, well before they can talk.
The title of this article doesn’t illuminate the depth of the emerging world of Virtual Reality - a depth of vast new ways to research the perceptions (physical, psychological, emotional) involved in providing a person with a sense of ‘real experience’ - of presence. It doesn’t even illuminate the engineering challenges. The talk is 58 min - for anyone interested in the future of virtual presence, Michael Abrash’s presentation (the first 28 min) is a must see.
Why Virtual Reality Isn't (Just) the Next Big Platform : Michael Abrash & Dov Katz of Oculus VR
Abstract
There’s been a lot of talk lately about how VR is the next big platform, but that’s not really accurate; it’s something bigger and more fundamental, nothing less than a phase change in the way that we interact with information. This talk will discuss why that’s so, what’s going to be involved in getting to that point, and why VR is going to open up huge new research and development opportunities.
VR is coupled with our embodiment more than any existing interface. A compelling and immersive VR experience requires that our interactions with the physical world are mapped precisely and with low latency onto the virtual environment. This creates a variety of challenges, including hardware design, processing multi-modal sensor data, and filtering. The second part of the talk will focus on our current solution to the problem of head tracking.
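The talk doesn’t spell out Oculus’ tracking pipeline, but as a flavour of the low-latency filtering problem it describes, here is a minimal, hypothetical sketch of a complementary filter fusing a drifting-but-smooth gyroscope signal with a noisy-but-absolute accelerometer estimate for a single head-rotation axis:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings for one rotation axis.

    gyro_rates   -- angular velocity samples in rad/s (smooth, but drifts over time)
    accel_angles -- angle estimates from gravity in rad (noisy, but drift-free)
    dt           -- sample period in seconds
    alpha        -- weight given to the integrated gyro at each step
    """
    angle = accel_angles[0]
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        # Integrate the gyro for short-term accuracy; lean on the accelerometer
        # to correct long-term drift.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# Example: a stationary head (zero rotation) with slightly noisy accelerometer readings.
print(complementary_filter([0.0, 0.0, 0.0], [0.02, -0.01, 0.015], dt=0.001)[-1])
```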
Biography
Over the last 30 years, Michael has worked at companies that made graphics hardware, computer-based instrumentation, and rendering software, been the GDI lead for the first couple of versions of Windows NT, worked with John Carmack on Quake, worked on Xbox and Xbox 360, written or co-written at least four software rasterizers (the last one of which, written at RAD Game Tools, turned into Intel’s late, lamented Larrabee project), and worked on VR at Valve. Along the way he wrote a bunch of magazine articles and columns for Dr. Dobb’s Journal, PC Techniques, PC Tech Journal, and Programmer’s Journal, as well as several books. He’s been lucky enough to have more opportunities to work on interesting stuff than he could ever have imagined when he almost failed sixth grade because he spent all his time reading science fiction. He is currently Chief Scientist at Oculus VR, and thinks VR is going to be the most interesting and important project of all.
This article will be of interest to anyone concerned with the future of drones and especially autonomous drones.
MIT engineers hand “cognitive” control to underwater robots
With MIT-developed algorithms, robots plan underwater missions autonomously.
For the last decade, scientists have deployed increasingly capable underwater robots to map and monitor pockets of the ocean to track the health of fisheries, and survey marine habitats and species. In general, such robots are effective at carrying out low-level tasks, specifically assigned to them by human engineers — a tedious and time-consuming process for the engineers.
When deploying autonomous underwater vehicles (AUVs), much of an engineer’s time is spent writing scripts, or low-level commands, in order to direct a robot to carry out a mission plan. Now a new programming approach developed by MIT engineers gives robots more “cognitive” capabilities, enabling humans to specify high-level goals, while a robot performs high-level decision-making to figure out how to achieve these goals.
For example, an engineer may give a robot a list of goal locations to explore, along with any time constraints, as well as physical directions, such as staying a certain distance above the seafloor. Using the system devised by the MIT team, the robot can then plan out a mission, choosing which locations to explore, in what order, within a given timeframe. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task, or reconfigure the hardware to recover from a failure, on the fly.
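The MIT planner itself isn’t reproduced here, but the flavour of “high-level goals in, feasible plan out” can be sketched with a toy planner (the goal names and numbers below are made up for illustration): visit goals in priority order and drop any that no longer fit the time budget.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    hours_needed: float   # assumed time to reach and survey the site
    priority: int         # higher means more important

def plan_mission(goals, time_budget_hours):
    """Toy high-level planner: keep goals in priority order while they still fit
    the remaining time budget; drop the ones that don't, as described above."""
    plan, remaining = [], time_budget_hours
    for goal in sorted(goals, key=lambda g: -g.priority):
        if goal.hours_needed <= remaining:
            plan.append(goal.name)
            remaining -= goal.hours_needed
    return plan

goals = [Goal("vent field", 3, 5), Goal("reef edge", 2, 3), Goal("wreck site", 4, 4)]
print(plan_mission(goals, time_budget_hours=6))   # -> ['vent field', 'reef edge']
```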
Amazing - each time I see either ‘Big Dog’ or ‘Spot’ or their siblings, they are ever more elegant. There are three short videos and a picture of the latest iteration of ‘Spot’ - a must see - just to grasp the speed of this trajectory.
Innovations: Google’s robotic ‘dog’ is now showing off dance moves
It’s not technically summer yet, but, hey, this is June, so it’s barbecue season! Everybody’s doing it. Hamburgers, hot dogs, the usual. Google Ventures, the investment arm of the search giant, got in on the fun Tuesday night. Of course this is Google, so there was some bleeding-edge technology on display.
Remember Spot, the robot that looks vaguely like a dog? We first met him in February when he was trotting around parking lots and drawing sympathy for absorbing vicious kicks.
Well, tech moves quickly, so Spot had some new material. He hopped around on two of his legs while maintaining balance. The robot has a long way to go before he’d stand a chance on “America’s Got Talent,” but progress is progress. Spot also did something weird that looked vaguely like stretching, in which he leaned back on his hind legs.
The question of the human-sensor-computer interface is progressing in other ways as well. Our world will be filled with internal and external sensors - so much so that the inevitable question becomes: where does a person end and the world begin?
What it Will Take to Make Health Monitors Smarter
Self-powered sensors are being integrated into fabric
Wearable health and fitness trackers are among the most popular gadgets around. More than 13 million of them were sold globally last year, according to GfK, a German market research firm. And these numbers are only expected to grow as more sophisticated versions hit the market, with claims that they can monitor vital signs such as blood pressure, heart rate, and even hydration levels.
But a few problems with activity trackers must be addressed before they can be used as medical monitors. Trackers tend to contain too few sensors, must be charged frequently, and can only be worn on the wrist or clipped to clothing. And perhaps most troubling, they aren’t accurate enough.
“Their level of accuracy is low because as a consumer product they haven’t gone through the rigorous regulatory approval of medical devices,” says IEEE Fellow Veena Misra. “But users want this type of technology.”
Misra is director of the U.S. National Science Foundation Nanosystems Engineering Research Center for Advanced Self-Powered Systems of Integrated Sensors and Technologies, better known as ASSIST. The center is located at North Carolina State University, in Raleigh, where she is also a professor of electrical and computer engineering. She is the lead author of “Flexible Technologies for Self-Powered Wearable Health and Environmental Sensing,” published in April in Proceedings of the IEEE.
At first sight this new research may not seem related to how to enact new forms of research analysis in all domains of science and medicine - but the level of qualitative analytic work via big quantities of data is interesting. What will the deep learning engines of the next decade bring?
Machine Vision Algorithm Chooses the Most Creative Paintings in History
Picking the most creative paintings is a network problem akin to finding super spreaders of disease. That’s allowed a machine to pick out the most creative paintings in history.
Creativity is one of humanity’s uniquely defining qualities. Numerous thinkers have explored the qualities that creativity must have, and most pick out two important factors: whatever the process of creativity produces, it must be novel and it must be influential.
The history of art is filled with good examples in the form of paintings that are unlike any that have appeared before and that have hugely influenced those that follow. Leonardo’s 1469 Madonna and child with a pomegranate, Goya’s 1780 Christ crucified or Monet’s 1865 Haystacks at Chailly at sunrise and so on. Other paintings are more derivative, showing many similarities with those that have gone before, and so are thought of as less creative.
The job of distinguishing the most creative from the others falls to art historians. And it is no easy task. It requires, at the very least, an encyclopedic knowledge of the history of art. The historian must then spot novel features and be able to recognize similar features in future paintings to determine their influence.
Those are tricky tasks for a human and until recently, it would have been unimaginable that a computer could take them on. But today that changes thanks to the work of Ahmed Elgammal and Babak Saleh at Rutgers University in New Jersey, who say they have a machine that can do just this.
They’ve put it to work on a database of some 62,000 pictures of fine art paintings to determine those that are the most creative in history. The results provide a new way to explore the history of art and the role that creativity has played in it.
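The Rutgers algorithm works on a large similarity network of paintings; as a toy illustration of the underlying idea (made-up paintings and feature vectors, not their method or data), a painting scores as creative when it is unlike the works that came before it but similar to the works that came after:

```python
# Toy illustration of the network idea described above: creativity as
# novelty with respect to earlier paintings plus influence on later ones.
# The paintings and feature vectors are invented for the example.

paintings = {
    "early_madonna":   (1469, [0.9, 0.1, 0.0]),
    "baroque_scene":   (1630, [0.8, 0.2, 0.1]),
    "romantic_christ": (1780, [0.2, 0.8, 0.3]),
    "haystacks":       (1865, [0.1, 0.7, 0.6]),
}

def cosine(u, v):
    # cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm

def creativity(name):
    year, feats = paintings[name]
    earlier = [cosine(feats, f) for n, (y, f) in paintings.items() if y < year]
    later   = [cosine(feats, f) for n, (y, f) in paintings.items() if y > year]
    novelty   = (1 - sum(earlier) / len(earlier)) if earlier else 1.0
    influence = (sum(later) / len(later)) if later else 0.0
    return novelty + influence

for name in paintings:
    print(name, round(creativity(name), 2))
```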
Another interesting development in the progress of robotics. But even more important is the insight that while we often seek the ‘optimal’ solution as the best one, what matters more is the feasible one:
...the algorithm does not provide an optimal jumping control, but rather, only a feasible one.
“If you want to optimize for, say, energy efficiency, you would want the robot to barely clear the obstacle — but that’s dangerous, and finding a truly optimal solution would take a lot of computing time,” Kim says. “In running, we don’t want to spend a lot of time to find a better solution. We just want one that’s feasible.”
Sometimes, that means the robot may jump much higher than it needs to — and that’s OK, according to Kim: “We’re too obsessed with optimal solutions. This is one example where you just have to be good enough, because you’re running, and have to make a decision very quickly.”
MIT cheetah robot lands the running jump
Robot sees, clears hurdles while bounding at 5 mph.
In a leap for robot development, the MIT researchers who built a robotic cheetah have now trained it to see and jump over hurdles as it runs — making this the first four-legged robot to run and jump over obstacles autonomously.
To get a running jump, the robot plans out its path, much like a human runner: As it detects an approaching obstacle, it estimates that object’s height and distance. The robot gauges the best position from which to jump, and adjusts its stride to land just short of the obstacle, before exerting enough force to push up and over. Based on the obstacle’s height, the robot then applies a certain amount of force to land safely, before resuming its initial pace.
In experiments on a treadmill and an indoor track, the cheetah robot successfully cleared obstacles up to 18 inches tall — more than half of the robot’s own height — while maintaining an average running speed of 5 miles per hour.
“A running jump is a truly dynamic behavior,” says Sangbae Kim, an assistant professor of mechanical engineering at MIT. “You have to manage balance and energy, and be able to handle impact after landing. Our robot is specifically designed for those highly dynamic behaviors.”
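Kim’s controller isn’t published in this article, but the “feasible rather than optimal” idea can be illustrated with simple point-mass kinematics (the 10 cm clearance margin is an assumption for the example): aim the jump apex a safe margin above the obstacle and work backwards to a takeoff speed and takeoff point.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def vertical_takeoff_speed(obstacle_height_m, clearance_m=0.10):
    """Vertical launch speed whose apex clears the obstacle by a safety margin --
    a 'feasible' choice rather than an energy-optimal, barely-clearing one."""
    apex = obstacle_height_m + clearance_m
    return math.sqrt(2 * G * apex)

def takeoff_distance(run_speed_ms, obstacle_height_m, clearance_m=0.10):
    """How far before the obstacle to jump so the apex occurs roughly over it."""
    vz = vertical_takeoff_speed(obstacle_height_m, clearance_m)
    return run_speed_ms * (vz / G)   # horizontal distance covered while rising to apex

# 5 mph is about 2.24 m/s; an 18-inch hurdle is about 0.46 m, as in the experiments above:
print(f"takeoff speed:    {vertical_takeoff_speed(0.46):.2f} m/s upward")
print(f"takeoff distance: {takeoff_distance(2.24, 0.46):.2f} m before the hurdle")
```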
Here’s a significant pivot point related to solar energy.
Solar power passes 1% global threshold
Solar power now covers more than 1% of global electricity demand. In three countries in Europe – Italy, Germany and Greece – solar PV supplies more than 7% of electricity demand. This is reported by Solar Power Europe (previously EPIA – European Photovoltaic Industry Association). China is the fastest growing market. Research company GlobalData has adjusted projected new capacity in China for 2015 upwards.
Last year 40 GW of new solar capacity was installed worldwide, compared to 38.4 GW in 2013, notes Solar Power Europe (SPE) in its Global Market Outlook 2015-2019.
Cumulative capacity is now 178 GW. In terms of generation, this is equivalent to 33 coal-fired power stations of 1 GW, notes SPE. In Europe last year 7 GW was installed, which was less than in 2013. The UK was the fastest growing market, contributing 2.4 GW. Europe now installs less solar power capacity than China or Japan individually, but still more than the US. However, Europe is still the world’s largest player with more than 88 GW installed at the end of 2014.
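The “33 coal-fired power stations” comparison rests on average generation rather than nameplate capacity; a rough check, using assumed capacity factors (the 15% and 80% figures below are illustrative, not from the SPE report):

```python
# Rough check of "178 GW of solar ~ 33 one-gigawatt coal plants" in generation terms.
# The capacity factors are illustrative assumptions, not figures from the SPE report.

solar_capacity_gw = 178
solar_capacity_factor = 0.15       # assumed global average for PV
coal_plant_gw = 1.0
coal_capacity_factor = 0.80        # assumed for a baseload coal plant

average_solar_output_gw = solar_capacity_gw * solar_capacity_factor
equivalent_coal_plants = average_solar_output_gw / (coal_plant_gw * coal_capacity_factor)

print(f"Average solar output:        {average_solar_output_gw:.1f} GW")
print(f"Equivalent 1 GW coal plants: {equivalent_coal_plants:.0f}")   # ~33
```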
Here’s a new report from the US - perhaps 2015 will be seen as the year of the tipping point? Whatever the case - the next decade will see significant change.
Residential solar installs post largest quarterly growth ever
Analysts expect a 24% increase in solar power this year
Residential installations of rooftop photovoltaic (PV) panels in the U.S. led the solar power market in the first quarter of this year, posting a record sequential 11% growth rate. That's the largest such uptick in history.
Residential systems were up 76%, compared with the first quarter of 2014, according to a U.S. Solar Market Insight report released today.
In all, the U.S. solar market saw just over 1.3 gigawatts (GW) of capacity installed in the first quarter, according to the report. It was the sixth consecutive quarter that solar power capacity in the U.S. grew by more than 1GW.
"We forecast that PV installations will reach 7.9GW in 2015, up 27% over 2014," the report stated.
On the topic of health - this is just very cool.
Virtual Reality Helps Stroke Patients Recover Use of Arm
Virtual reality could assist arm rehabilitation in some stroke patients, according to a clinical pilot study published in the open access Journal of NeuroEngineering and Rehabilitation. The researchers found that using virtual reality to increase a patient’s confidence in using their paralyzed arm may be critical for recovery.
Stroke patients with ‘hemiparesis’ – reduced muscle strength on one side of the body – often underuse their affected limbs even though they still have some motor function.
Using their healthy limb may immediately improve the ease of their daily activities, but a long period of non-use of the affected ‘paretic’ limb can lead to further loss of function. This so-called ‘learned non-use’ is a well-known effect in stroke patients and has been associated with a reduced quality of life.
The small pilot study involved 20 hemiparetic stroke patients using the ‘Rehabilitation Gaming System’ with a Microsoft Kinect sensor. The system allows users to control a virtual body via their own movements, seen from a first-person perspective on a computer screen, with which they perform tasks in a virtual world.
Here’s an interesting article for anyone with doubts about the emerging real world impact of 3D printing.
3D printing just made space travel cheaper
Companies looking to launch satellites into space typically spend anywhere from $10-50 million per launch but thanks to 3D printing, those costs are set to drop in a big way.
For $4.9 million, businesses can use RocketLab to send small satellites into orbit. The firm's engine, called the Rutherford, is powered by an electric motor and is the first oxygen and hydrocarbon engine to use 3D printing for all its primary components. The New Zealand company is set to begin test flights this year and aims to launch weekly commercial operations next year.
"Our 'Electron' launch vehicle is designed with the purpose of liberating the small satellite market. The whole program is predicated on reducing costs and increasing launch frequency, making space more accessible to everyone" CEO Peter Beck told CNBC.
Those two factors are crucial to opening up space travel, widely seen as unaffordable with long wait times to get into orbit, according to Beck. But a 3D printed engine means an engine can be built in just three days, allowing RocketLab to deliver both on cost and frequency.
This may or may not be a fun game to play - but it does seem like a very interesting experiment in social computing.
Introducing Eco: An ecosystem sim where everyone must nurture a shared planet
Or face server-wide permadeath.
Eco is one of the most ambitious and original concepts I've seen in a while. It tasks players with manipulating an ecosystem with one catch: all players on the server are working in the same environment. If the community fails to nurture nature, it's permadeath for everyone and a new planet is spawned.
Each time a server is started, it begins at the dawn of civilisation. There will be some ecological disaster looming on the horizon, such as a flood, drought or meteor, several real-time weeks away. The community must advance their civilisation enough to thwart these deadly threats or it's bye bye cruel world for everyone.
But it won't just be a simple matter of clicking on panels to produce factories. Instead, players will have to use elements of nature. Plants and animals will have their own lives. Raze a forest without replanting it and you'll run out of materials to erect buildings. Eat too many of one kind of fish and they'll become extinct, which will have an adverse effect on those who prey on them. Build a factory too close to a river and you'll pollute it.
"This ecosystem is your only lifeline in a race against time, your source of resources that will either prevent humanity's destruction, or become the source of its destruction when the group squanders its resources," developer Strange Loop Games explained. "Thus you're facing two existential crises simultaneously: an external threat that you must avert, and the threat of causing your own destruction. A rock and a hard place."
To prevent trolls from simply ruining the game for everyone, the community will be able to vote on laws dictating the rules of the game. An example Strange Loop offered on its official site is that a group can vote on a law mandating that each person can chop down no more than 10 trees a day. After that, the game simply won't let you cut down another.
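Strange Loop hasn’t published how the enforcement works under the hood; as a toy sketch of the mechanic (all names below are hypothetical, not the game’s actual API), a voted law simply becomes a per-player check the game runs before allowing the action:

```python
from collections import defaultdict

TREES_PER_DAY_LIMIT = 10          # the limit the community voted into law
chopped_today = defaultdict(int)  # per-player count, reset at the start of each in-game day

def try_chop_tree(player):
    """Allow the chop only while the player is under the voted daily limit."""
    if chopped_today[player] >= TREES_PER_DAY_LIMIT:
        return False              # the game "simply won't let you cut down another"
    chopped_today[player] += 1
    return True

print(all(try_chop_tree("alice") for _ in range(10)))   # True: the first ten chops succeed
print(try_chop_tree("alice"))                           # False: the eleventh is blocked
```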
Looking at the future of the home and especially the kitchen - the new smart appliances won’t be just screens and links to our phones - they will also be capable of learning.
June, A Countertop Smart Oven, Launches With A $1,495 Price Tag
What happens when you put a bunch of people who have worked at Apple, GoPro, Path, Google, and so on, in a room and ask them to design an oven?
The answer, apparently, is June: a countertop smart oven that will cost around $1,495 and ship next spring. It’s about the size of a microwave and designed to operate more like a smart device than a typical oven. It sits on your counter and is loaded with all the technology you might expect in a smart home device, from smarter temperature sensors to connectivity with an app on your phone.
Perhaps most notable is its built-in camera that uses deep learning technology to determine what kind of food you are trying to make and then give suggestions based on that. It also comes with a temperature probe that will send alerts to your phone. The typical cooking example Van Horn gave in our conversation was a steak, which he said previously required a thermometer or cutting into it to determine if it’s cooked.
“You take the steak, put salt and pepper on it, put in the core temperature thermometer, plug [the thermometer] into the oven and keep the steak in the oven, and by the time the door is closed it’s smart enough to know that it’s a steak,” Van Horn said. “It knows how much it weighs and its starting core temperature. Depending on your preference, it can predict a time curve that leads it into the medium rare, and it sends my phone a push notification when it’s done. If you’re anxious, you can use a streaming feature which allows you to get a live video feed of your food.”
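June hasn’t disclosed its prediction model; as an illustrative sketch of what “predicting a time curve” could look like, a simple Newton’s-law-of-heating model (the oven temperature, rate constant and target temperature below are assumptions, not June’s parameters) estimates time to medium rare from the starting core temperature:

```python
import math

def minutes_to_target(core_start_c, target_c, oven_temp_c, k_per_min=0.02):
    """Time for the core to reach target_c, assuming it approaches the oven
    temperature exponentially: T(t) = oven - (oven - start) * exp(-k * t)."""
    return math.log((oven_temp_c - core_start_c) / (oven_temp_c - target_c)) / k_per_min

# A steak straight from the fridge (5 C) to medium rare (54 C) in a 120 C oven:
print(f"{minutes_to_target(5, 54, 120):.0f} minutes")   # ~28 minutes with these assumptions
```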