In the 21st Century curiosity will SKILL the cat.
...a way for organizations to apply algorithmic principles to make frequent, calibrated adjustments to their business models, resource allocation processes, and structures—without direction from the top.
That’s a provocative claim, but it’s based on actual developments we’ve observed at Internet companies like Google, Netflix, Amazon, and Alibaba. These enterprises have become extraordinarily good at automatically retooling their offerings for millions of individual customers, leveraging real-time data on their behavior. Those constant updates are, in fact, driven by algorithms, but the processes and technologies underlying the algorithms aren’t magic: it’s possible to pull them apart, see how they operate, and use that know-how in other settings. And that’s just what some of those same companies have started to do.
As the companies pioneering self-tuning algorithms grow and mature, they increasingly face the challenge of running versus reinventing themselves—and not just in the marketing department. No surprise, then, that some are introducing new managerial practices that extend self-tuning principles across the entire enterprise.
To understand how this works, think of the enterprise as a nested set of strategic processes. At the highest level, the vision articulates the direction and ambition of the firm as a whole. As a means to achieving the vision, a company deploys business models and strategies that bring together capabilities and assets to create advantageous positions. And it uses organizational structure, information systems, and culture to facilitate the effective operation of those business models and strategies.
Focus on seizing and shaping strategic opportunities, not on executing plans. In volatile environments, plans can quickly become out-of-date. In Alibaba’s case, rapid advances in technology, shifting consumer expectations in China and beyond, and regulatory uncertainty made it difficult to predict the future. To deal with this situation, Alibaba adopted a continuous process of “replanning.” Rather than meticulously executing a fixed, detailed blueprint, the company keeps revising its strategy and tactics as circumstances change.
The Self-Tuning Enterprise
...the purpose of copyright law is not to guarantee authors a living, nor is it to give them exclusive control over who uses their work and how. The purpose of the law is to provide an incentive for people to create artistic works because doing this benefits society...
Google's victory in book-scanning case is a huge win for fair use
By 2020, solar energy will be price-competitive with energy generated from fossil fuels on an unsubsidized basis in most parts of the world. Within the next decade, it will cost a fraction of what fossil-fuel-based alternatives do.
The coming era of unlimited — and free — clean energy
In creative, knowledge-based work it is increasingly difficult to know the best mix of capabilities and tasks in advance. Recruiting is becoming a matter of expensive guesswork. Matching the patterns of work with the capabilities of individuals beforehand is getting close to impossible. What, then, is the use of the organizational theater when it is literally impossible to define the organization before we actually do something? What if the organization really should be a process of emergent self-organizing in the way the platforms make possible?
Instead of thinking about the organization let’s think about organizing as an ongoing thing. Then the managerial task is to make possible very easy and very fast emergent responsive interaction and group formation. It has to be as easy as possible for the best contributions from the whole network to find the applicable contextual needs and people.
Instead of the topology of organizational boxes that are often the visual representation of work, the picture of work is a live social graph.
Traditional business economics focus on economies of scale derived from the resource base of the company, which scales much more slowly than the network effects the new firms are built on. The start-ups have a huge advantage over the incumbents.
In practice this means that the peer-to-peer platforms can attain the level of customer reach and network size required to capture almost any market, even as the size of the core (firm) stays relatively small.
Platforms are a valuable, shared resource making interactive value creation possible through organizing and simplifying participation. Sociologists have called such shared resources public goods. A private good is one that the owners can exclude others from using. In the era of scarcity economics, private goods were valuable and public goods were not. This is now changing in a dramatic way, creating the intellectual confusion we are in the midst of today. The physical commons were, and still often are, over-exploited, but the new commons follow a different logic: the more they are used, the more valuable they are for each participant.
From Firms to Platforms to Commons
Liberalism: The Implicit Tragedy of the Commons
Garrett Hardin is often cited for his 1968 essay, “The Tragedy of the Commons.” In this classic critique of common property management, Hardin gives the example of herders grazing their cattle on a shared parcel of land. He observes that these individual herdsmen, acting out of self-interest, will put more and more cattle in the pasture. This overgrazing, he says, will deplete or perhaps destroy the field’s limited resources of foliage and soil, which is not in the long-term interest of the group. The failure of such commons through disorganization and waste illustrates the need for external intervention, Hardin implies, whether through the private or public sectors. Yet numerous studies have shown that societies can successfully manage their resources using some form of collective property. Traditional communities which organize resources through customary practice, and modern social networks such as digital communities which rely on system architectures and cooperative standards, both demonstrate that tragedies of the commons are not inevitable. Despite broad differences in local circumstances, this research suggests, common property can be effectively managed using informal norms.
However, these analyses rarely explore the historical and philosophical contexts in which failed commons occur. They avoid the most basic questions. What exactly is the social nature of property? How does a property regime reflect the self-understanding of the people who use it? Why do individuals relate to property through shared assumptions about who they are? The study of resources, bodies, lives, minds and human interaction expands commons research from the realm of social analysis into the moral philosophy of the greater good. And the philosophy of what is good for society inevitably raises the question of the personal metaphysics behind property. What is the understanding of human nature in societies where failed commons occur? How do individuals rationalize a social belief system that encourages them to act against their own interests, as Hardin says, allowing their commons to fail? And why should we assume that property has but one meaning, which may be defined only within the context of liberal society?
Commonhood is the self-organizing and rule-guided practice of a community to preserve, make, manage or use a resource through collaboration.
When users are co-producers of the goods and services they receive and organize, their motivations, knowledge and skills become part of the production praxis, leading to new ways of interacting and coordinating natural, social and economic life. This collaboration between users, producers and suppliers over their shared property is the basis of commonhood.
The Failed Metaphysics Behind Private Property: Sharing our Commonhood
Here is a must-read article for anyone concerned about Knowledge Management: Ursula K. Le Guin’s views on communication.
Telling Is Listening: Ursula K. Le Guin on the Magic of Conversation and Why Human Communication Is Like Amoebas Having Sex
“Words are events, they do things, change things. They transform both speaker and hearer; they feed energy back and forth and amplify it. They feed understanding or emotion back and forth and amplify it.”
Every act of communication is an act of tremendous courage in which we give ourselves over to two parallel possibilities: the possibility of planting into another mind a seed sprouted in ours and watching it blossom into a breathtaking flower of mutual understanding; and the possibility of being wholly misunderstood, reduced to a withering weed. Candor and clarity go a long way in fertilizing the soil, but in the end there is always a degree of unpredictability in the climate of communication — even the warmest intention can be met with frost. Yet something impels us to hold these possibilities in both hands and go on surrendering to the beauty and terror of conversation, that ancient and abiding human gift. And the most magical thing, the most sacred thing, is that whichever the outcome, we end up having transformed one another in this vulnerable-making process of speaking and listening.
Why and how we do that is what Ursula K. Le Guin (b. October 21, 1929) explores in a magnificent piece titled “Telling Is Listening,” found in The Wave in the Mind: Talks and Essays on the Writer, the Reader, and the Imagination (public library), which also gave us her spectacular meditations on being a man and what beauty really means.
But the magic of human communication, Le Guin observes, is that something other than mere information is being transmitted — something more intangible yet more real
In most cases of people actually talking to one another, human communication cannot be reduced to information. The message not only involves, it is, a relationship between speaker and hearer. The medium in which the message is embedded is immensely complex, infinitely more than a code: it is a language, a function of a society, a culture, in which the language, the speaker, and the hearer are all embedded.
Paralleling Hannah Arendt’s assertion that “nothing and nobody exists in this world whose very being does not presuppose a spectator,” Le Guin points out that all speech invariably presupposes a listener.
Listening is not a reaction, it is a connection. Listening to a conversation or a story, we don’t so much respond as join in — become part of the action.
This is an excellent article summarizing the trajectory of the future of work in a near-zero transaction and marginal cost economy.
From Firms to Platforms to Commons
Many people see peer-to-peer platforms as game changers in the world of work with the potential of reinventing the economy and giving individuals the power of the corporation. Others are sceptical and warn that the new architectures of participation and choice are in reality architectures of exploitation, giving rise to a new class of workers, “the precariat”, people who endure insecure conditions, very short-term work and low wages with no collective bargaining power, abandoned by the employee unions, rendering them atomized and powerless.
I have just finished reading “PEERS INC”, an excellent book by Robin Chase. It is both a practical guide and a textbook that explains what is happening today in the (almost) zero transaction cost economy, the digitally enabled new world that has given rise to peer-to-peer platforms as the most modern iteration of the firm.
Robin Chase explains well how the patterns of work and the roles of workers are becoming very different from what we are used to: the industrial production of physical goods was financial capital-intensive, leading to centralized management and manufacturing facilities where you needed to be during predetermined hours. The industrial era created the employers, the employees and the shareholder capitalism we now experience.
In the network economy, individuals, interacting voluntarily with each other by utilizing the new platforms/apps and relatively cheap mobile devices they own themselves, can create value, and, even more importantly, utilize resources and available “excess capacity” as Robin Chase calls it. Business can be done in a much more sustainable way than was possible during the industrial era.
This is a great article - well worth the time to read.
There's a Dark Side to Heroic Leadership, and It's Haunting Most Organizations
In her recent TED Talk, Margaret Heffernan, an entrepreneur, former CEO of five companies and author, discussed an experiment on productivity by evolutionary biologist William Muir at Purdue University. Muir was interested in productivity and used chickens for one simple reason: their productivity is easy to measure because you can just count the eggs. He wanted to know what factors could make chickens more productive, so he devised a beautiful experiment.
Chickens live in groups, so he divided a flock of chickens into two groups. The first group was left alone for six generations. In the second group, Muir only selected the most productive chickens for breeding — those who produced the most eggs — to create a “superflock” of “superchickens.”
After six generations had passed, what did he find? Well, the first group, the average group, was doing just fine. They were all plump, fully feathered. Their egg production had increased dramatically.
In the second group, all but three superchickens were dead. They had pecked all of the others to death. The individually productive chickens in the “superflock” had only survived and achieved success by suppressing the productivity of the rest.
For the past 50 years, we’ve run most organizations and some societies along the superchicken model. We assume that picking the superstars, the brightest and most knowledgeable men or women in the room, and giving them all the resources and all the power is the key to achieving success. The result has been just the same as in William Muir’s experiment: aggression, dysfunction and waste.
But why is that? At least part of the reason is that we live and operate within a culture that values and personifies leaders as superheroes, above everything else.
A few Friday Thinkings ago I included a story about working at Amazon - it seems that the story left out quite a bit… Well it also seems that even ‘trusted’ news sources can produce less than trustworthy reporting. Here’s another side to the same story.
What The New York Times Didn’t Tell You
“Nearly every person I worked with, I saw cry at their desk.”
If you read the recent New York Times article about Amazon’s culture, you remember that quote. Attributed to Bo Olson, the image of countless employees crying at their desks set the tone for a front-page story that other media outlets described as “scathing,” “blistering,” “brutal” and “harsh.” Olson’s words were so key to the narrative the Times wished to construct that they splashed them in large type just below the headline.
Here’s what the story didn’t tell you about Mr. Olson: his brief tenure at Amazon ended after an investigation revealed he had attempted to defraud vendors and conceal it by falsifying business records. When confronted with the evidence, he admitted it and resigned immediately.
Why weren’t readers given that information? The Times boasts that the two reporters with bylines on the story, Jodi Kantor and David Streitfeld, spent six months working on it. We were in regular communication with Ms. Kantor from February through the publication date in mid-August. And yet somehow she never found the time, or inclination, to ask us about the credibility of a named source whose vivid quote would serve as a lynchpin for the entire piece. Did Ms. Kantor’s editors at the Times ask her whether Mr. Olson might have an axe to grind? Or under what circumstances Mr. Olson’s employment at Amazon was terminated? Even with breaking news, journalistic standards would encourage working hard to uncover any bias in a key source. With six months to work on the story, journalistic standards absolutely require it.
If only it were an isolated mistake. In fact, Kantor never asked us to check or comment on any of the dozen or so negative anecdotes from named sources that form the narrative backbone of the story. If she had, Times readers would have learned a few other things….
Here’s something that looks like the future of social science research (even though IBM has not been known as a social science research organization). The key question IBM’s trajectory raises is: what technology platforms and scientific capabilities will a social science research organization need in the next decade?
IBM's Next Big Move for Big Data: Cognitive Computing
IBM's new consulting unit, Cognitive Business Solutions, which is based on its Watson artificial intelligence computer system, is the latest move by Big Blue to take what it has learned in working with big data to chart the course for the company's future. IBM's Senior Vice President of Global Business Services Bridget van Kralingen said, "Cognitive is the future of IBM, very much powered by Watson."
The new business unit will have roughly 2,000 employees and will create "a global center of competence...where we develop the capability of our people and that consists of data scientists, mathematicians, people with advanced analytics and then people with knowledge of the various industries --and that is used to seed practices around the world."
Watson allows IBM's clients to analyze big quantities of data and provides answers based on evidence and discovering patterns that could be invisible to the human eye. Van Kralingen said, "what we're coming to understand is that this [Watson] is now getting that good where you can imagine a world where almost every process in an organization can be improved by the use of cognitive -- both from an external point of view and internal efficiency."
IBM's pivot to cognitive computing and away from hardware could pay off handsomely. According to a report from International Data Corporation, by 2018 half of all consumers will regularly interact with services based on cognitive computing.
Here’s something that may give IBM’s Watson a run for its value - or maybe, eventually, give Watson some augmented computational capacity. It’s still not ready for primetime, and HP has seemed lethargic in providing the resources to accelerate the development of the memristor - but this does represent an emerging alternative computational paradigm that will also likely need corresponding operating systems and software application languages.
HP and SanDisk join forces to finally bring memristor-like tech to market
Intel and Micron's "holy grail" of memory—3D XPoint—doesn't even exist yet, but HP and SanDisk have now announced that they're partnering up to make their own competing version of this so-called "storage class memory." There's something about the HP/SanDisk announcement that feels a bit off, though, so bear that in mind as you read this story.
Back in July, Intel and Micron announced that they had created a new type of memory that has 1,000 times the performance and endurance of NAND flash, while also being 10 times denser than DRAM. There were some power and cost reductions touted, too. All in all, it was exactly what the enterprise computing market has a hankerin' for: oodles of fast, cheap memory to keep massive databases and other big data-style stuff in-memory for faster computation.
At the time, Intel and Micron said the new memory (pronounced "Three-dee cross point") was already in production and would be sampled "later this year with select customers." At the time, we asked Intel for some more details—you know, so that we could work out what was actually happening under the bonnet—but so far, no such details or specifications have been forthcoming. We haven't yet heard of any customers receiving usable memory chips based on 3D XPoint tech, either.
The HP and SanDisk press release describes something very similar to 3D XPoint. In fact, it's nigh on identical: "The technology is expected to be up to 1,000 times faster than flash storage and offer up to 1,000 times more endurance than flash storage. It also is expected to offer significant cost, power, density and persistence improvements over DRAM technologies."
This is a great 20-minute video by one of the founders of D-Wave, the quantum computer company. For anyone who doesn’t understand what quantum computing is, this is a great introduction. Even if you think you understand, this talk makes clear just how different, how profound, the shift toward quantum computing is. The doubling of qubits on a quantum computing chip happens every year - faster than Moore’s Law.
Quantum Computing – Artificial Intelligence Is Here
Geordie Rose, founder of D-Wave (recent clients include Google and NASA), believes that the power of quantum computing is that we can ‘exploit parallel universes’ to solve problems that we have no other means of confirming. Simply put, quantum computers can think exponentially faster and simultaneously, such that as they mature they will outpace us.
For some other videos see this site
D-Wave Systems' main quantum computing revenue will be software licensing and a cloud model
And in a related way this article advances the work on proving quantum entanglement.
Sorry, Einstein. Quantum Study Suggests ‘Spooky Action’ Is Real
In a landmark study, scientists at the Delft University of Technology in the Netherlands reported they have conducted an experiment they say proves one of the most fundamental claims of quantum theory — that objects separated by great distance can instantaneously affect each other’s behavior.
The finding is another blow to one of the bedrock principles of standard physics known as “locality,” which states that an object is directly influenced only by its immediate surroundings. The Delft study, published Wednesday in the journal Nature, lends further credence to an idea that Einstein famously rejected. He said quantum theory necessitated “spooky action at a distance,” and he refused to accept the notion that the universe could behave in such a strange and apparently random fashion.
Now you may have heard someone talk about the message of the digital environment being social computing. Here’s something that might sound similar: a social form of ‘cognitive computing’.
LINCing Operational Problems to Government Solvers
When operators in the field have a problem, they want a solution—now. And what if the solution already exists? What if, among the thousands of men and women who make up the DoD and federal workforce, someone has already solved the problem? That brilliant idea is like a needle in a haystack. We want to create a way for that brilliant idea to be found by the operators who need it. Instead of allowing that vast workforce to be a limitation, we want to leverage it, to see it not merely as a workforce, but as thousands of problem solvers.
On October 8, 2015, The Combating Terrorism Technical Support Office (CTTSO) launched the Laboratory Innovation Crowdsourcing (LINC) portal to present operational challenges to the entire body of government employees – military, civilian, and contractor – for solutions.
CTTSO is a DoD-hosted, interagency-focused office working to develop novel solutions to combat terrorism on behalf of the Armed Forces and U.S. federal, state, and local operators.
Cognitive computing and automated algorithmic analysis - with the addition of real-time Big Data - represents the emerging informational atmosphere we will soon be living, working and playing in and with.
Automating big-data analysis
System that replaces human intuition with algorithms outperforms 615 of 906 human teams
Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
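To make that feature idea concrete, here is a minimal sketch in Python (using pandas) of the kind of derived features the passage describes. The table, column names, and numbers are illustrative assumptions, not taken from the MIT system itself.

```python
import pandas as pd

# Hypothetical promotions table: raw columns are start date, end date and total profit.
promotions = pd.DataFrame({
    "promo_start": pd.to_datetime(["2015-01-05", "2015-03-02"]),
    "promo_end":   pd.to_datetime(["2015-01-19", "2015-03-30"]),
    "total_profit": [42000.0, 91000.0],
})

# The raw dates are rarely predictive on their own; derived features often are.
promotions["span_days"] = (promotions["promo_end"] - promotions["promo_start"]).dt.days
promotions["avg_weekly_profit"] = promotions["total_profit"] / (promotions["span_days"] / 7)

print(promotions[["span_days", "avg_weekly_profit"]])
```

A system like the one described below automates the search for derivations of this kind across many tables, rather than relying on an analyst to guess which ones matter.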
MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.
In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.
“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”
Maybe this is a ‘weak signal’ of the future - the need for a vast creative commons to speed both innovation and more rapid solutions to the world’s pressing problems.
GOOD GUY NASA JUST RELEASED OVER 1,200 PATENTS TO TECH STARTUPS, FREE OF CHARGE
As the foremost aeronautics and space organization in the world, NASA has amassed an absolutely huge collection of patents over the years. These innovations have been closely guarded by the organization for decades, but just last week, the agency decided to pull an Elon Musk, and release more than 1,200 of its technology patents to the world.
The idea is that providing easier access to the agency’s patents will help foster innovation in the tech sector. “The Startup NASA initiative leverages the results of our cutting-edge research and development so entrepreneurs can take that research — and some risks — to create new products and new services,” explained David Miller, NASA’s chief technologist.
There is one catch to all of this, though. NASA isn’t fully releasing all of its patents. Instead, it’s basically giving tech startups an open invitation to license patented NASA tech with no upfront costs. The agency will waive the initial licensing fees, and there are no minimum fees for the first three years — but once a startup starts selling a product, NASA will collect a standard net royalty fee.
This is one of the first steps toward mass customization of our clothing. Imagine clothes made to fit you personally, with no wasted stock.
LIKEAGLOVE SMART LEGGINGS WILL HELP YOU FIND YOUR PERFECT PAIR OF JEANS
A couple years ago, I would complain about people texting me to check my email, or Facebook messaging me to log into Twitter, or just generally middle-manning technology. And now, my brain is preparing to explode once again. You can now buy smart leggings that will tell you what jeans to buy, because now, we need one piece of clothing to tell us how another will fit. Thanks to “smart leggings” LikeAGlove, you’ll now be able to scour the Internet for the perfect pair of jeans to fit your unique figure in just five seconds. So really, you’re just becoming your own personal shopper.
As per LikeAGlove’s website, “Our one-size-fits-all, smart elastic leggings will snuggly fit you no matter what size or shape you are. Just turn the leggings on with the press of a button and they are ready to accurately measure your figure.” Thanks to an abundance of sensors and conductive fibers woven into the clothing, after donning the leggings and pushing a rather unfortunately placed orange button, you’ll have key measurements like the length of your leg, waist, thigh, inseam, and upper and lower hips to go off of when you start the arduous task of jean shopping.
And soon a lot more smart things in our environment may be extensions of our minds.
Paralyzed Man’s Arm Wired to Receive Brain Signals
After doctors bridge his spinal injury with electronics, a paralyzed man can control his arm with his thoughts.
Scientists at Case Western Reserve University in Ohio say they’ve used electronics to get around a paralyzed man’s spinal injury, permitting him to use an implant in his brain to move his arm and hand.
The test represents the first time that signals collected in the brain have been conveyed directly to electrodes placed inside someone’s arm to restore movement, says Robert Kirsch, a biomedical engineer at Case Western. He also directs the Cleveland FES Center, which develops technologies for people with paralysis.
The project, described today at the meeting of the Society for Neuroscience in Chicago, is a step toward a wireless system able to transmit brain signals through the air to electronics sewn into the limbs of paralyzed people, thereby restoring the ability to carry out simple daily tasks.
2015 is coming to a close - in 2020 we will likely see it as the year of the tipping point - renewable energy, self-driving cars, automation, AI-Machine Learning, the Internet-of-Things. Here’s another ‘weak signal’ of the phase transition in mass-transit. Plus they are really cute.
EasyMile's driverless bus rolls out in Singapore and California
A driverless bus developed by French firm EasyMile is to go into operation at a business park in California and a park in Singapore. The EZ10 is operated entirely autonomously and doesn't even have a steering wheel. EasyMile says it hopes to have 100-200 EZ10s in operation by 2017.
Like the Lutz Pathfinder, the EZ10 is designed for last mile travel, such as between travel hubs and final destinations, or for looped routes within confined areas, like airports, city centers and business parks. For the time being, it will not be used on the road. It differs from the Pathfinder in that it is already in full operation in a number of locations around the world, and can accommodate 12 passengers instead of just two.
The EZ10 is fully electric and is powered by a lithium-ion battery, which can be fully charged in eight hours. This gives it up to 12 hours of operation and a range of up to 80 km (50 mi). It has an average cruising speed of up to 20 km/h (12 mph) and a top speed of 40 km/h (25 mph).
There is no special infrastructure required for the EZ10 to travel along, unlike with a tram or a train. Instead, a route or "virtual line" is created for the vehicle to follow, which it can then repeat continually. A mix of localization, navigation, and obstacle detection/avoidance technologies is used to ensure the EZ10 stays on course and negotiates hazards. Among the sources from which it pulls data are lidar, video, Differential Global Positioning System (DGPS) and odometry sensors (which estimate change in position over time).
There is a lot of concern about ‘hacking’ driverless cars - but then again, the recent Volkswagen scandal indicates that ‘inside’ jobs are something to fear as well. This article should ease our worries - at least for a while.
Why Car Hacking Is Nearly Impossible
Despite recent claims, your car is not about to get crashed by hackers
As scare-tactic journalism goes, it would be hard to beat this past summer's article about hackers taking remote control of a Wired magazine writer's car.
“I was driving 70 mph on the edge of downtown St. Louis,” he wrote. “As the two hackers remotely toyed with the air-conditioning, radio, and windshield wipers, I mentally congratulated myself on my courage under pressure. That's when they cut the transmission.”
Scary! Hackers can take over our cars! Our lives are at risk!
No, they're not.
Stories such as these are catnip to mainstream media and the technophobic public. Unfortunately, they leave out or underplay a detail or two that would take most of the air out of the drama: these aren't just any cars.
In the case of the Wired article, the Jeep belonged to the hackers. They had been working on it for more than a year to figure out how to hack it.
That's really a different story.
In February 60 Minutes ran a story about a similar experiment. “Oh, my God,” the correspondent exclaims as her brakes stop working. “That is frightening!”
But would it have been as frightening if she had mentioned that this kind of hack requires a car with cellular Internet service, that it had taken a team of researchers years to make it work—and that by then the automaker had fixed the software to make such a hack impossible for vehicles on the road?
Here’s some interesting research - it seems that sleep is a very ancient human issue. Personally I think it is because of us night owls that those damn early birds had fire to cook breakfast and had a safe night’s sleep.
What a nightmare: sleep no more plentiful in primitive cultures
Maybe we cannot blame late-night TV, endless Internet surfing, midnight snacks, good books, bothersome work deadlines and other distractions of modern life for encroaching on our sleep.
Research unveiled on Thursday showed that people in isolated and technologically primitive African and South American cultures get no more slumber than the rest of us.
Scientists tracked 94 adults from the Tsimane people of Bolivia, Hadza people of Tanzania and San people of Namibia for a combined 1,165 days in the first study on the sleep patterns of people in primitive foraging and hunting cultures.
Even without electricity or other modern trappings, they logged an average of 6 hours and 25 minutes of sleep daily, a figure near the low end of industrialized society averages.
"The bigger conclusion is not that they sleep less but that they very clearly do not sleep more, contrary to what has been assumed," said UCLA psychiatry professor Jerome Siegel.
Even the Washington Post understands the looming phase transition in geopolitics.
The coming era of unlimited — and free — clean energy
In the 1980s, leading consultants were skeptical about cellular phones. McKinsey & Company noted that the handsets were heavy, batteries didn’t last long, coverage was patchy, and the cost per minute was exorbitant. It predicted that in 20 years the total market size would be about 900,000 units, and advised AT&T to pull out. McKinsey was wrong, of course. There were more than 100 million cellular phones in use in 2000; there are billions now. Costs have fallen so far that even the poor — all over the world — can afford a cellular phone.
The experts are saying the same about solar energy now. They note that after decades of development, solar power hardly supplies 1 percent of the world’s energy needs. They say that solar is inefficient, too expensive to install, and unreliable, and will fail without government subsidies. They too are wrong. Solar will be as ubiquitous as cellular phones are.
Futurist Ray Kurzweil notes that solar power has been doubling every two years for the past 30 years — as costs have been dropping. He says solar energy is only six doublings — or less than 14 years — away from meeting 100 percent of today’s energy needs. Energy usage will keep increasing, so this is a moving target. But, by Kurzweil’s estimates, inexpensive renewable sources will provide more energy than the world needs in less than 20 years. Even then, we will be using only one part in 10,000 of the sunlight that falls on the Earth.
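As a rough, back-of-the-envelope check on that doubling arithmetic, here is a minimal sketch; the starting share below is an assumption on the order of the "hardly 1 percent" figure quoted above, not a number from the article.

```python
# Back-of-the-envelope doubling arithmetic behind the Kurzweil-style claim.
# ASSUMPTION: solar supplies roughly 1.5% of world energy today (illustrative).
current_share = 0.015
doubling_period_years = 2

share, doublings = current_share, 0
while share < 1.0:          # until solar could cover 100% of today's demand
    share *= 2
    doublings += 1

print(doublings, "doublings ~", doublings * doubling_period_years, "years")
# With a 1.5% start this gives 7 doublings (~14 years); a ~1.6% start
# reproduces the article's six doublings (~12 years).
```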
In places such as Germany, Spain, Portugal, Australia, and the Southwest United States, residential-scale solar production has already reached “grid parity” with average residential electricity prices. In other words, it costs no more in the long term to install solar panels than to buy electricity from utility companies. The prices of solar panels have fallen 75 percent in the past five years alone and will fall much further as the technologies to create them improve and scale of production increases. By 2020, solar energy will be price-competitive with energy generated from fossil fuels on an unsubsidized basis in most parts of the world. Within the next decade, it will cost a fraction of what fossil-fuel-based alternatives do.
It isn’t just solar production that is advancing at a rapid rate; there are also technologies to harness the power of wind, biomass, thermal, tidal, and waste-breakdown energy, and research projects all over the world are working on improving their efficiency and effectiveness. Wind power, for example, has also come down sharply in price and is now competitive with the cost of new coal-burning power plants in the United States. It will, without doubt, give solar energy a run for its money. There will be breakthroughs in many different technologies, and these will accelerate overall progress.
Despite the skepticism of experts and criticism by naysayers, there is little doubt that we are heading into an era of unlimited and almost free clean energy. This has profound implications.
Now if free or zero-marginal-cost energy still sounds far-fetched - here’s an oil company that is finding solar energy cheaper to use than oil - even in the process of extracting oil!!
GlassPoint Oil-Producing Solar Farm Nears Construction Start
GlassPoint Solar Inc will begin work on one of the world’s largest solar parks this month, with first production planned for late 2017.
The 1-gigawatt solar-thermal project will turn water into steam for injection into an oilfield in Oman. The process is known as enhanced oil recovery or EOR and involves heating the ground to improve the flow of heavy crude to the surface.
The Fremont, California-based company is working with Petroleum Development Oman, a joint venture with Royal Dutch Shell Plc, Total SA and the government of Oman. The project is a landmark deal in terms of size, but also because it is the first time that solar energy has been used to produce oil at a commercial scale. GlassPoint previously did smaller pilot projects involving solar and oil.
GlassPoint’s steam-making facility will largely be run on the sun’s energy by day and natural gas at night. Solar-powered steam is 10 percent cheaper than natural gas in California. In Oman, it’s about 28 percent cheaper compared to the export price for liquefied natural gas.
GlassPoint is also considering developing solar energy for other applications in the oil industry that use thermal heat, such as desalination and desulfurization, which remove salt and sulfur from water. It may eventually develop sun-powered plants for other industrial uses, but "not for decades."
Accelerating the domestication of DNA - more customized pets now - enhancing humans in our lifetime. How far away are DIY gene-edited, post-modern primitive enhancements?
First Gene-Edited Dogs Reported in China
An extra-muscular beagle has been created through genome engineering. Are we on our way to customizing the DNA of our pets?
Scientists in China say they are the first to use gene editing to produce customized dogs. They created a beagle with double the amount of muscle mass by deleting a gene called myostatin.
The dogs have “more muscles and are expected to have stronger running ability, which is good for hunting, police (military) applications,” Liangxue Lai, a researcher with the Key Laboratory of Regenerative Biology at the Guangzhou Institutes of Biomedicine and Health, said in an e-mail.
Lai and 28 colleagues reported their results last week in the Journal of Molecular Cell Biology, saying they intend to create dogs with other DNA mutations, including ones that mimic human diseases such as Parkinson’s and muscular dystrophy. “The goal of the research is to explore an approach to the generation of new disease dog models for biomedical research,” says Lai. “Dogs are very close to humans in terms of metabolic, physiological, and anatomical characteristics.”
Lai said his group had no plans to breed the extra-muscular beagles as pets. Other teams, however, could move quickly to commercialize gene-altered dogs, potentially editing their DNA to change their size, enhance their intelligence, or correct genetic illnesses. A different Chinese institute, BGI, said in September it had begun selling miniature pigs, created via gene editing, for $1,600 each as novelty pets.
The Chinese beagle project was led by Lai and Gao Xiang, a specialist in genetic engineering of mice at Nanjing University. The dogs are being kept at the Guangzhou General Pharmaceutical Research Institute, which says on its website that it breeds more than 2,000 beagles a year for research. Beagles are commonly used in biomedical research in both China and the U.S.
And here’s a fascinating article - should make us think a bit deeper about collective sentience - if not about collective intelligence.
Biologists discover bacteria communicate like neurons in the brain
Biologists at UC San Diego have discovered that bacteria—often viewed as lowly, solitary creatures—are actually quite sophisticated in their social interactions and communicate with one another through similar electrical signaling mechanisms as neurons in the human brain.
In a study published in this week's advance online publication of Nature, the scientists detail the manner by which bacteria living in communities communicate with one another electrically through proteins called "ion channels."
"Our discovery not only changes the way we think about bacteria, but also how we think about our brain," said Gürol Süel, an associate professor of molecular biology at UC San Diego who headed the research project. "All of our senses, behavior and intelligence emerge from electrical communications among neurons in the brain mediated by ion channels. Now we find that bacteria use similar ion channels to communicate and resolve metabolic stress. Our discovery suggests that neurological disorders that are triggered by metabolic stress may have ancient bacterial origins, and could thus provide a new perspective on how to treat such conditions."
"Much of our understanding of electrical signaling in our brains is based on structural studies of bacterial ion channels" said Süel. But how bacteria use those ion channels remained a mystery until Süel and his colleagues embarked on an effort to examine long-range communication within biofilms—organized communities containing millions of densely packed bacterial cells. These communities of bacteria can form thin structures on surfaces—such as the tartar that develops on teeth—that are highly resistant to chemicals and antibiotics.
The scientists' interest in studying long-range signals grew out of a previous study, published in July in Nature, which found that biofilms are able to resolve social conflicts within their community of bacterial cells just like human societies.
When a biofilm composed of hundreds of thousands of Bacillus subtilis bacterial cells grows to a certain size, the researchers discovered, the protective outer edge of cells, with unrestricted access to nutrients, periodically stopped growing to allow nutrients—specifically glutamate—to flow to the sheltered center of the biofilm. In this way, the protected bacteria in the colony center were kept alive and could survive attacks by chemicals and antibiotics.