In the 21st Century curiosity will SKILL the cat.
You work more from your relations than your skills.
The productivity of an individual depends not just on being part of a community but being part of a particular community engaged in particular commitments. The context matters most.
What is now disappearing is not work, but the notion of a job.
Esko Kilpi - The new kernel of on-demand work
By 2020, many hospitals will have genomic medicine departments, designing medical therapies based on your personal genetic constitution. Gene sequencers – machines that can take a blood sample and reel off your entire genetic blueprint – will shrink below the size of USB drives. Supermarkets will have shelves of home DNA tests, perhaps nestled between the cosmetics and medicines, for everything from whether your baby will be good at sports to the breed of cat you just adopted, to whether your kitchen counter harbours enough ‘good bacteria’. We will all know someone who has had their genome probed for medical reasons, perhaps even ourselves. Personal DNA stories – including the quality of the bugs in your gut– will be the stuff of cocktail party chitchat.
By 2025, projections suggest that we will have sequenced the genomes of billions of individuals. This is largely down to the explosive growth in the field of cancer genomics. Steve Jobs, the co-founder of Apple, became one of the early adopters of genomic medicine when he had the cancer that killed him sequenced. Many others will follow. And we will become more and more willing to act on what our genes tell us. Just as the actress Angelina Jolie chose to undergo a double mastectomy to stem her chances of developing breast cancer, society will think nothing of making decisions based on a wide range of genes and gene combinations. Already a study has quantified the ‘Angelina Jolie effect’. Following her public announcement, the number of women turning to DNA testing to assess their risk for familial breast cancer doubled.
We are beginning to think of the DNA on Earth as a whole. All life, including humans, ultimately exists in one system, our Pale Blue Dot. Let’s give it a name. Let’s call the sum of DNA on Earth ‘The Biocode’.
By 2050 we should aim to finally have a handle not only on human genetic diversity but on the biodiversity of the planet. We will have hopefully completed a DNA-based Systema Naturae, the work that Linnaeus, the father of taxonomy, first published in 1735. The key question will be how much of the Earth’s genetic legacy will remain. Projects such as the Smithsonian’s Global Genome Initiative are trying to freeze samples from all extant organisms for future DNA sequencing, both to await even lower sequencing costs and to protect genomes that might become extinct prior to being read.
If all life has DNA and it is interlinked on our planet, then the entire planetary ecology can be likened to a giant computer
By 2100, could the Biocode be significantly synthetic in nature? It is not too far-fetched to suppose that we will see both the rise of industrially produced bespoke creatures and the loss of naturally occurring organisms born without the intervention of a computer.
Perfect Genetic Knowledge
Urbanization has already declared itself the mega-trend of the 21st century, with half the world’s population now living in cities for the first time in human history. While the implications for economic growth have been widely discussed, urbanization’s impact on diplomacy and sovereignty will be equally profound.
Today, numerous cities have substantially more economic weight, international connectivity, and diplomatic influence on the world stage than dozens of nations. The rise of cities as transnational actors is thus driven not only by urbanization and globalization, but also a third nearly irreversible phenomenon: devolution. The period since the end of the Cold War has witnessed not only a major wave of new state births stemming from the collapse of the Soviet Union and disintegration of Yugoslavia, but a much broader entropy as sub-state and provincial authorities leverage the forces of transparency, identity, and connectivity to push for greater autonomy. Quebec, the Basque country, Flanders, Greenland, Scotland, and Catalonia are among the hundreds of examples of provincial entities asserting their domestic autonomy and international credentials. Australia and Canada have also witnessed the conspicuous rise of local councils such as the City of Sydney or Vancouver City Council, both now far more advanced in developing integrated environmental strategies than their respective federal governments.
Nations are no longer driving globalization—cities are
This is a MUST VIEW video (about 1 hour) by Harry Collins, for anyone concerned with knowledge - acquisition, practice, community, language and sharing of knowledge.
Tacit Knowledge, Interactional Expertise and the Imitation Game
Harry Collins is a sociologist of science, professor and director of the Centre for the Study of Knowledge, Expertise and Science at Cardiff University in Wales. A major figure in sociology, he is the author of twenty books that have profoundly changed our understanding of science and technology. He presents a lecture titled "Tacit Knowledge, Interactional Expertise and the Imitation Game" on his recent work on the concept of tacit knowledge and on his European project focused on the Imitation Game.
He has spent many, many years engaged in deep fieldwork with the gravitational-wave physics community.
Here is a great paper by Collins that helps elaborate how important ‘interactional expertise’ is and how it enables knowledge to flow. It helps ground ‘conversation theory’ and ‘social tacit knowledge’ - among other concepts.
Quantifying the Tacit: The Imitation Game and Social Fluency
Investigating cultural differences
How different social groups understand the world is a central topic of sociological inquiry. One component of these differences can be described in terms of ‘tacit knowledge’, that is those things that members of a social group know but cannot say.[i] Tacit knowledge can be transferred only by social interaction, unlike explicit knowledge which can be transferred in the form of books, computer files and the like. It could, therefore, be said that to an important extent it is its shared tacit knowledge that makes a group a social group and is what separates one ‘taken-for-granted reality’ (Schutz, 1964), or one ‘form-of-life’ (Wittgenstein, 1953; Winch, 1958), or one ‘social collectivity’ (Durkheim, 1915), or one ‘paradigm’ (Kuhn, 1962), or one culture (Kluckhohn, 1962; Geertz, 1973), or one subculture (Yinger, 1982), or one ‘microculture’ or ‘ideoculture’ (Fine, 2007), or one technical expertise (Collins, 1990, 2010; MacKenzie and Spinardi 1995) from another.
What we propose here is a new research method – the Imitation Game – that provides a systematic and quantifiable way to investigate such differences, along with a new approach to the qualitative exploration of cultural worlds.[ii] Although this is work in progress we are able to report some striking results with strong statistical significance. We also suggest that, because the method uses local actors as ‘proxy researchers’, Imitation Games automatically control for the way cultural divisions are differently expressed in different societies or in the same society at different times. Not only does this allow quantitative comparisons to be made with a greater degree of confidence than might otherwise be possible, the qualitative data that is generated at the same time enables these different modes of expression to be explored.
The next section of the paper sets out the idea of ‘interactional expertise’ that underpins the Imitation Game method and describes its links with the wider sociological literature. After this, we briefly describe the method itself (see Collins et al, 2006 for an extended discussion) before summarising the empirical data we have gathered to date. The paper concludes by discussing the links between the Imitation Game and social research more generally.
Interactional expertise
Interactional expertise is expertise in the language of a culture or community that is acquired through sustained immersion in the linguistic discourse but without engaging in the associated physical practices; mastery of the physical practices is known as ‘contributory expertise’.[iii] To give an example, after decades of doing deep fieldwork with gravitational-wave scientists, Harry Collins claims to have interactional expertise in gravitational wave detection physics. This means he can talk and answer questions as if he were a gravitational wave physicist, even though he has never done any work on the detectors, has made no original calculations and is unlikely to ever be a co-author of a gravitational wave paper published in a physics journal. Collins’s claim is that, despite having no contributory expertise, the interactional expertise acquired as a result of immersion in the spoken discourse of the gravitational wave community is enough to enable him to make judgments in respect of technical matters which are not dissimilar from those made by full-blown gravitational wave physicists.[iv] Extrapolating this idea to other settings suggests that it may not be necessary for, say, men to live as women, or gay people to live as straight people, in order for them to make similar judgements to those from the other culture. Instead, all that matters is that they are sufficiently immersed in the relevant linguistic discourse for long enough. If this is correct, then the possession of interactional expertise can be used as an indicator of immersion in a discourse and an understanding of the society to which it pertains.
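A rough way to see how the Imitation Game turns tacit knowledge into numbers: judges from the target group question a genuine member and a pretender, try to tell them apart, and the rate of correct identifications is compared against chance. Below is a minimal sketch of that scoring step in Python; the counts are invented and this is not the authors' actual analysis pipeline.

```python
from math import comb

def binomial_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k correct
    identifications if judges were merely guessing between two candidates."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical Imitation Game results: in each trial a judge from the target
# group questions one genuine member and one pretender, then guesses which
# is which.  Counts below are made up for illustration only.
trials = 120            # number of judge decisions
correct = 85            # judge correctly spotted the pretender

identification_rate = correct / trials
p_value = binomial_sf(correct, trials)   # one-sided test against chance (p = 0.5)

print(f"identification rate: {identification_rate:.2f}")
print(f"one-sided p-value vs. guessing: {p_value:.2e}")
# A rate well above 0.5 with a tiny p-value suggests the pretenders lack the
# target group's tacit knowledge; a rate near 0.5 suggests they can 'pass'.
```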
As we all know - generating trustworthy knowledge is hard - and as a famous saying goes - ‘we have met the enemy and he is us’. This is a worthwhile Nature article on the topic.
How scientists fool themselves – and how they can stop
Humans are remarkably good at self-deception. But growing concern about reproducibility is driving many researchers to seek ways to fight their own worst instincts.
In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy, Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.
Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.
Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”
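Gelman's point about plausible-looking coding errors is easy to illustrate. Here is a minimal sketch, using synthetic data rather than anything from his study, of how flipping the sign of one predictor produces an estimate that looks every bit as 'reasonable' as the correct one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: outcome rises with x (true slope +0.5) plus noise.
n = 500
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=1.0, size=n)

def fit_slope(predictor, outcome):
    """Ordinary least-squares slope of outcome on a single predictor."""
    X = np.column_stack([np.ones_like(predictor), predictor])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coef[1]

good = fit_slope(x, y)      # correct coding
flipped = fit_slope(-x, y)  # sign error while preparing the variable

print(f"correct coding:  slope = {good:+.2f}")
print(f"sign-flipped x:  slope = {flipped:+.2f}")
# Both estimates are the same size and equally 'reasonable' looking;
# only the sign differs, so nothing screams that the analysis is wrong.
```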
This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford University in Palo Alto, California. The issue goes well beyond cases of fraud. Earlier this year, a large project that attempted to replicate 100 psychology studies managed to reproduce only slightly more than one-third. In 2012, researchers at biotechnology firm Amgen in Thousand Oaks, California, reported that they could replicate only 6 out of 53 landmark studies in oncology and haematology. And in 2009, Ioannidis and his colleagues described how they had been able to fully reproduce only 2 out of 18 microarray-based gene-expression studies.
This is another interesting - though perhaps not momentous - milestone in the emergence of the digital environment.
OCLC prints last library catalog cards
Today’s library users have access to the world’s knowledge through digital library networks
OCLC printed its last library catalog cards today, officially closing the book on what was once a familiar resource for generations of information seekers who now use computer catalogs and online search engines to access library collections around the world.
This final print run marked the end of a service that has steadily decreased over the past few decades as libraries have moved their catalogs online.
"Print library catalogs served a useful purpose for more than 100 years, making resources easy to find within the walls of the physical library," said Skip Prichard, OCLC President and CEO. "Today, libraries are taking their collections to their users electronically, wherever they may be. Collections are searchable online, and through popular search engines. Thousands of libraries are globally interconnected through high-speed networks, and in many cases their digitized collections are available and accessible to readers and researchers from anywhere in the world."
As a leading global library cooperative, OCLC provides the shared technology services, original research and programs libraries need to better fuel learning, research and innovation. Through OCLC, member libraries cooperatively produce and maintain WorldCat, the world's most comprehensive global network of data about library collections and services.
OCLC built the world's first online shared cataloging system in 1971 and, over decades, merged the catalogs of thousands of libraries through a computer network and database. That database, now known as WorldCat, not only made it possible for libraries to catalog cooperatively, but also to share resources held in other libraries on the network. It also made it possible for libraries to order custom-printed catalog cards that would be delivered to the library already sorted and ready to be filed.
For anyone worried about the life-expectancy of ‘Moore’s Law’ here’s some good news.
'Major' IBM breakthrough breathes new life into Moore’s Law
Silicon is dead. Long live carbon nanotubes.
In transistors, size matters — a lot. You can’t squeeze more silicon transistors (think billions of them) into a processor unless you can make them smaller, but the smaller these transistors get, the higher the resistance between contacts, which means the current can’t flow freely through them and, in essence, the transistors, and the chips built from them, can no longer do their jobs. Ultra-tiny carbon nanotube transistors, though, are poised to solve the size issue.
In a paper published on Thursday in the journal Science, IBM scientists announced they had found a way to reduce the contact length of carbon nanotube transistors — a key component of the tech and the one that most impacts resistance — down to 9 nanometers without increasing resistance at all. To put this in perspective, contact length on traditional, silicon-based 14nm node technology (something akin to Intel’s 14nm technology) currently sits at about 25 nanometers.
"In the silicon space, the contact resistance is very low if the contact is very long. If contact is very short, the resistance shoots up very quickly and gets large. So you have trouble getting current through the device,” Wilfried Haensch, IBM Senior Manager, Physics & Materials for Logic and Communications, told me.
...These results put the world one step closer to carbon nanotube-based integrated circuits. Such chips could conceivably run at the same speed as current transistors, but use significantly less power.
At maximum power, though, Haensch told me, these carbon nanotube chips could run at significantly faster speeds. Not only does this promise a future of ever faster computers, but it could lead to considerably better battery life for your most trusted companion — the smartphone.
And while the topic of Moore’s Law is in view - here’s something that may even accelerate it.
Crucial hurdle overcome in quantum computing
The significant advance, by a team at the University of New South Wales (UNSW) in Sydney appears today in the international journal Nature.
"What we have is a game changer," said team leader Andrew Dzurak, Scientia Professor and Director of the Australian National Fabrication Facility at UNSW.
"We've demonstrated a two-qubit logic gate - the central building block of a quantum computer - and, significantly, done it in silicon. Because we use essentially the same device technology as existing computer chips, we believe it will be much easier to manufacture a full-scale processor chip than for any of the leading designs, which rely on more exotic technologies.
"This makes the building of a quantum computer much more feasible, since it is based on the same manufacturing technology as today's computer industry," he added.
The advance represents the final physical component needed to realise the promise of super-powerful silicon quantum computers, which harness the science of the very small - the strange behaviour of subatomic particles - to solve computing challenges that are beyond the reach of even today's fastest supercomputers.
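For readers who want to see what a "two-qubit logic gate" actually is, mathematically it is just a fixed operation on the joint state of two qubits. Below is a toy sketch of the canonical example, the CNOT gate, as a matrix acting on state vectors; this is illustrative only - the UNSW result is a physical silicon device, not this arithmetic.

```python
import numpy as np

# Basis ordering for two qubits: |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ket(label):
    """Return the computational-basis state vector for a 2-qubit label like '10'."""
    state = np.zeros(4, dtype=complex)
    state[int(label, 2)] = 1.0
    return state

# CNOT flips the second (target) qubit only when the first (control) qubit is 1.
for label in ["00", "01", "10", "11"]:
    out = CNOT @ ket(label)
    print(label, "->", format(int(np.argmax(np.abs(out))), "02b"))

# Applied to a superposition, the same gate entangles the qubits -- the step
# that makes two-qubit gates the 'central building block' of quantum circuits.
plus = (ket("00") + ket("10")) / np.sqrt(2)            # control in superposition
print("entangled output:", np.round(CNOT @ plus, 3))   # (|00> + |11>)/sqrt(2)
```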
Many cities are experiencing transportation disruption as UBER advances its distributed crowdsourcing model. But there’s a shadow looming over UBER and the incumbent taxi industry as well. This could just as easily morph into new forms of public mass transit - displacing both forms of corporate delivery of transportation services.
Report: Japan to Test Autonomous Taxis
Japan will begin testing autonomous taxis—carrying human passengers—on public roads next year, and hopes to show off its technology by the 2020 Olympic Games in Tokyo.
Announced Thursday by the Japanese government and Robot Taxi Inc., the trial will initially serve about 50 residents in the Kanagawa prefecture, shuttling them between their home and local grocery stores, according to The Wall Street Journal.
The cabs—retrofitted versions of Toyota's Estima hybrid minivan—will drive about two miles, part of which will be major city roads; two co-pilots will be present during test drives, in case of emergency.
There is no word on how many vehicles will be deployed during the testing phase. But if the program is successful, Robot Taxi expects to have a fully commercial service running within five years—just in time for Tokyo to host the Summer Olympics.
There’s been so much written on the looming emergence of robotics and automation in the last few years - here’s another article, from the Boston Consulting Group, that is worth the read - even for those already convinced that, yes, we are on the verge of a revolution.
The Robotics Revolution: The Next Great Leap in Manufacturing
The real robotics revolution is ready to begin. Many industries are reaching an inflection point at which, for the first time, an attractive return on investment is possible for replacing manual labor with machines on a wide scale. We project that growth in the global installed base of advanced robotics will accelerate from around 2 to 3 percent annually today to around 10 percent annually during the next decade as companies begin to see the economic benefits of robotics. In some industries, more than 40 percent of manufacturing tasks will be done by robots. This development will power dramatic gains in labor productivity in many industries around the world and lead to shifts in competitiveness among manufacturing economies as fast adopters reap significant gains.
A confluence of forces will power the robotics takeoff. The prices of hardware and enabling software are projected to drop by more than 20 percent over the next decade. At the same time, the performance of robotics systems will improve by around 5 percent each year. As robots become more affordable and easier to program, a greater number of small manufacturers will be able to deploy them and integrate them more deeply into industrial supply chains. Advances in vision sensors, gripping systems, and information technology, meanwhile, are making robots smarter, more highly networked, and immensely more useful for a wider range of applications. All of these trends are occurring at a time when manufacturers in developed and developing nations alike are under mounting pressure to improve productivity in the face of rising labor costs and aging workforces.
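The compounding behind those projections is worth spelling out. Here is a quick back-of-the-envelope sketch using the article's headline rates; the smoothing assumptions, such as treating the decade-long price drop as a steady annual decline, are mine.

```python
# BCG's headline figures: hardware/software prices fall >20% over the decade,
# performance improves ~5% per year, installed-base growth rises from ~2-3%
# to ~10% per year.  The compounding assumptions below are illustrative.

years = 10
price_drop_total = 0.20          # at least 20% cheaper after a decade
perf_gain_per_year = 0.05        # ~5% better performance each year

annual_price_decline = 1 - (1 - price_drop_total) ** (1 / years)
performance_after_decade = (1 + perf_gain_per_year) ** years

# Cost per unit of robot performance, normalised to today = 1.0
cost_per_performance = (1 - price_drop_total) / performance_after_decade

print(f"implied annual price decline: {annual_price_decline:.1%}")
print(f"performance multiple after {years} years: {performance_after_decade:.2f}x")
print(f"cost per unit of performance in 10 years: {cost_per_performance:.2f} of today's")

# Installed-base growth compounding at 10%/yr vs 2.5%/yr over the decade:
print(f"installed base at 10%/yr: {1.10**years:.2f}x  vs  2.5%/yr: {1.025**years:.2f}x")
```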
To assess the potential impact of the coming robotics revolution on industries and national competitiveness, The Boston Consulting Group conducted an extensive analysis of 21 industries in the world’s 25 leading manufacturing export economies, which account for more than 90 percent of global trade in goods. We analyzed five common robot setups to understand the investment, cost, and performance of each. We examined every task in each of those industries to determine whether it could be replaced or augmented by advanced robotics or whether it would likely remain unchanged. After accounting for differences in labor costs, productivity, and mix by industry in each country, we developed a robust view of more than 2,600 robot-industry-country combinations and the likely rate of adoption in each.
But let’s not simply reduce AI and automation to mechanistic or logical work and processes. Of course, we have all come to accept that beauty is in the eye of the beholder - but maybe this article is hinting at the approaching success of AI in passing an artistic Turing Test. The images are worth the view.
Intelligent Machines: AI art is taking on the experts
In a world where machines can do many things as well as humans, one would like to hope there remain enclaves of human endeavour to which they simply cannot aspire.
Art, literature, poetry, music - surely a mere computer without world experience, moods, memories and downright human fallibility cannot create these.
Meet Aaron, a computer program that has been painting since the 1970s - big dramatic, colourful pieces that would not look out of place in a gallery.
The "paintings" Aaron does are realised mainly via a computer program and created on a screen although, when his work began being exhibited, a painting machine was constructed to support the program with real brushes and paint.
Aaron does not work alone of course. His painting companion is Harold Cohen, who has "spent half my life trying to get a computer program to do what only rather talented human beings can do".
A painter himself, he became interested in programming in the late 1960s at the same time as he was pondering his own art and asking whether it was possible to devise a set of rules and then "almost without thinking" make the painting by following the rules.
This is a Must Read summary article about where the efforts around domesticating DNA now stand and where they may well be in the very near future.
Perfect Genetic Knowledge
Human genomics is just the start: the Earth has 50 billion tons of DNA. What happens when we have the entire biocode?
In case you weren’t paying attention, a lot has been happening in the science of genomics over the past few years. It is, for example, now possible to read one human genome and correct all known errors. Perhaps this sounds terrifying, but genomic science has a track-record in making science fiction reality. ‘Everything that’s alive we want to rewrite,’ boasted Austen Heinz, the CEO of Cambrian Genomics, last year.
It was only in 2010 that Craig Venter’s team in Maryland led us into the era of synthetic genomics when they created Synthia, the first living organism to have a computer for a mother. A simple bacterium, she has a genome just over half a million letters of DNA long, but the potential for scaling up is vast; synthetic yeast and worm projects are underway.
Two years after the ‘birth’ of Synthia, sequencing was so powerful that it was used to extract the genome of a newly discovered, 80,000-year-old human species, the Denisovans, from a pinky bone found in a frozen cave in Siberia. In 2015, the United Kingdom became the first country to legalise the creation of ‘three-parent babies’ – that is, babies with a biological mother, father and a second woman who donates a healthy mitochondrial genome, the energy producer found in all human cells.
Commensurate with their power to change biology as we know it, the new technologies are driving renewed ethical debates. Uneasiness is being expressed, not only among the general public, but also in high-profile articles and interviews by scientists. When China announced it was modifying human embryos this April, the term ‘CRISPR/Cas9’ trended on the social media site Twitter. CRISPR/Cas9, by the way, is a protein-RNA combo that defends bacteria against marauding viruses. Properly adapted, it allows scientists to edit strings of DNA inside living cells with astonishing precision. It has, for example, been used to show that HIV can be ‘snipped’ out of the human genome, and that female mosquitoes can be turned male to stop the spread of malaria (only females bite).
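Much of that 'astonishing precision' is sequence matching: the Cas9 protein is steered by a roughly 20-letter guide RNA and cuts only where that sequence sits next to a short PAM motif (NGG for the commonly used Cas9). Here is a toy sketch of that targeting logic on made-up DNA; it does exact matching on one strand only, so it ignores mismatch tolerance and the reverse strand.

```python
import re

def find_cas9_targets(dna, guide):
    """Return positions where the 20-nt guide matches the DNA and is
    immediately followed by an NGG PAM (the rule for S. pyogenes Cas9).
    Toy model: exact matches on one strand only, no mismatch tolerance."""
    hits = []
    for m in re.finditer(guide, dna):
        pam = dna[m.end():m.end() + 3]
        if len(pam) == 3 and pam[1:] == "GG":     # 'N' can be any base
            hits.append(m.start())
    return hits

# Made-up sequences for illustration only.
genome_fragment = "TTACGGATCCAGTCGATTACGGCATGCAGTACGTTTGACCAGTCGATTACGGCATGCAGTTGG"
guide_rna       = "AGTCGATTACGGCATGCAGT"   # 20 nt

print("cut sites at positions:", find_cas9_targets(genome_fragment, guide_rna))
# Only the occurrence followed by an NGG motif is a valid target; the same
# 20-nt match without the PAM is ignored -- which is where the specificity
# (and the off-target worry when near-matches exist) comes from.
```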
But one of CRISPR’s co-developers, Jennifer Doudna of the University of California in Berkeley, has ‘strongly discouraged’ any attempts to edit the human genome pending a review of the ethical issues. Well, thanks to China, that ship has sailed. Indeed, now the technology appears to be finding its way into the hands of hobbyists: Nature recently reported that members of the ‘biohacker’ sub-culture have been messing around with CRISPR, though the enthusiast they interviewed didn’t appear to have a clear idea of what he wanted to do with it.
And here’s another indication of the acceleration of domesticating DNA.
Gene-editing record smashed in pigs
Researchers modify more than 60 genes to enable organ transplants into humans
For decades, scientists and doctors have dreamed of creating a steady supply of human organs for transplantation by growing them in pigs. But concerns about rejection by the human immune system and infection by viruses embedded in the pig genome have stymied research. By modifying more than 60 genes from pig embryos — ten times more than have been edited in any other animal — researchers believe they may have produced a suitable non-human organ donor.
The work was presented on 5 October at a meeting of the National Academy of Sciences (NAS) on human gene editing. Geneticist George Church of Harvard Medical School in Boston, Massachusetts, announced that he and colleagues had used CRISPR gene-editing technology to inactivate 62 porcine endogenous retroviruses (PERVs) in pig embryos. These viruses are embedded in all pigs’ genomes and cannot be treated or neutralized. It is feared that they could cause disease in human transplant recipients.
Church’s group also modified more than 20 genes in a separate set of embryos, including genes encoding proteins that sit on the surface of pig cells and are known to trigger the human immune system or cause blood clotting. He declined to reveal the exact genes, however, as the work is unpublished. Eventually, pigs intended for organ transplants will have both these modifications and the PERV deletions.
Here is an interesting research finding - a potentially new ecology for styrofoam and plastic, and a hint that the future of the economy is also an ecological one.
Plastic-eating worms may offer solution to mounting waste, Stanford researchers discover
An ongoing study by Stanford engineers, in collaboration with researchers in China, shows that common mealworms can safely biodegrade various types of plastic.
Consider the plastic foam cup. Every year, Americans throw away 2.5 billion of them. And yet, that waste is just a fraction of the 33 million tons of plastic Americans discard every year. Less than 10 percent of that total gets recycled, and the remainder presents challenges ranging from water contamination to animal poisoning.
Enter the mighty mealworm. The tiny worm, which is the larval form of the darkling beetle, can subsist on a diet of Styrofoam and other forms of polystyrene, according to two companion studies co-authored by Wei-Min Wu, a senior research engineer in the Department of Civil and Environmental Engineering at Stanford. Microorganisms in the worms' guts biodegrade the plastic in the process – a surprising and hopeful finding.
"Our findings have opened a new door to solve the global plastic pollution problem," Wu said.
The papers, published in Environmental Science and Technology, are the first to provide detailed evidence of bacterial degradation of plastic in an animal's gut. Understanding how bacteria within mealworms carry out this feat could potentially enable new options for safe management of plastic waste.
In the lab, 100 mealworms ate between 34 and 39 milligrams of Styrofoam – about the weight of a small pill – per day. The worms converted about half of the Styrofoam into carbon dioxide, as they would with any food source.
Within 24 hours, they excreted the bulk of the remaining plastic as biodegraded fragments that look similar to tiny rabbit droppings. Mealworms fed a steady diet of Styrofoam were as healthy as those eating a normal diet, Wu said, and their waste appeared to be safe to use as soil for crops.
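To put those figures in scale, here is a quick back-of-the-envelope calculation; the mass of a foam cup is my rough assumption, not a number from the study.

```python
# Figures from the article: 100 mealworms eat 34-39 mg of Styrofoam per day,
# converting roughly half of it to CO2.  The foam-cup mass below is a rough
# assumption for illustration (a typical foam cup is on the order of 2 g).

worms = 100
mg_per_day_low, mg_per_day_high = 34, 39
cup_mass_mg = 2_000                     # assumed ~2 g per foam cup

days_per_cup_low = cup_mass_mg / mg_per_day_high
days_per_cup_high = cup_mass_mg / mg_per_day_low

print(f"{worms} mealworms need roughly {days_per_cup_low:.0f}-{days_per_cup_high:.0f} "
      f"days to work through one ~2 g foam cup")

# Scaling up: how many worms to digest one cup per day?
worms_per_cup_per_day = worms * cup_mass_mg / ((mg_per_day_low + mg_per_day_high) / 2)
print(f"about {worms_per_cup_per_day:,.0f} worms could digest one cup per day")
# The point is less 'worm farms for trash' than that the gut bacteria doing
# the degrading could point to new ways of treating plastic waste.
```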
On the energy front here’s one more article pointing to the looming shift in energy paradigm.
Solar and Wind Just Passed Another Big Turning Point
It has never made less sense to build fossil fuel power plants.
Wind power is now the cheapest electricity to produce in both Germany and the U.K., even without government subsidies, according to a new analysis by Bloomberg New Energy Finance (BNEF). It's the first time that threshold has been crossed by a G7 economy.
But that's less interesting than what just happened in the U.S.
To appreciate what's going on there, you need to understand the capacity factor. That's the percentage of a power plant's maximum potential that's actually achieved over time.
Consider a solar project. The sun doesn't shine at night and, even during the day, varies in brightness with the weather and the seasons. So a project that can crank out 100 megawatt hours of electricity during the sunniest part of the day might produce just 20 percent of that when averaged out over a year. That gives it a 20 percent capacity factor.
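In formula terms, capacity factor is simply the energy a plant actually delivers divided by what it would deliver running flat out over the same period. Here is a minimal sketch of that calculation, with illustrative numbers:

```python
def capacity_factor(energy_delivered_mwh, nameplate_mw, hours):
    """Actual output divided by the maximum possible output over the period."""
    return energy_delivered_mwh / (nameplate_mw * hours)

# Illustrative solar project: 100 MW of panels that, over a year, average out
# to 20% of their theoretical maximum because of night, weather and seasons.
nameplate_mw = 100
hours_in_year = 8760
energy_delivered = 0.20 * nameplate_mw * hours_in_year   # 175,200 MWh

print(f"solar capacity factor: {capacity_factor(energy_delivered, nameplate_mw, hours_in_year):.0%}")

# A typical gas plant, by contrast, might deliver ~70% of its potential.
gas_energy = 0.70 * nameplate_mw * hours_in_year
print(f"gas capacity factor:   {capacity_factor(gas_energy, nameplate_mw, hours_in_year):.0%}")
```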
One of the major strengths of fossil fuel power plants is that they can command very high and predictable capacity factors. The average U.S. natural gas plant, for example, might produce about 70 percent of its potential (falling short of 100 percent because of seasonal demand and maintenance). But that's what's changing, and it's a big deal.
For the first time, widespread adoption of renewables is effectively lowering the capacity factor for fossil fuels. That's because once a solar or wind project is built, the marginal cost of the electricity it produces is pretty much zero—free electricity—while coal and gas plants require more fuel for every new watt produced. If you're a power company with a choice, you choose the free stuff every time.
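A toy dispatch model (all numbers invented) shows why that choice mechanically drags down the fossil plant's capacity factor:

```python
# Toy merit-order dispatch: each hour, demand is served by the cheapest
# available generation first.  Wind/solar have ~zero marginal cost, so they
# always run when available; the gas plant only fills the gap.  All numbers
# are made up for illustration.

hours = 24
demand_mw = [70] * hours                              # flat 70 MW demand
renewable_mw = [0]*6 + [20, 40, 60, 60, 60, 60,       # output follows the sun/wind
                        60, 60, 40, 20, 0, 0] + [0]*6
gas_capacity_mw = 80

gas_output = [min(max(d - r, 0), gas_capacity_mw)
              for d, r in zip(demand_mw, renewable_mw)]

gas_cf = sum(gas_output) / (gas_capacity_mw * hours)
gas_cf_without_renewables = sum(demand_mw) / (gas_capacity_mw * hours)

print(f"gas capacity factor without renewables: {gas_cf_without_renewables:.0%}")
print(f"gas capacity factor with renewables:    {gas_cf:.0%}")
# Same gas plant, same demand -- but the free renewable energy displaces gas
# output and lowers the utilisation the plant can count on to recover its
# fixed costs.
```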
The shift illustrates a serious new risk for power companies planning to invest in coal or natural-gas plants. Historically, a high capacity factor has been a fixed input in the cost calculation. But now anyone contemplating a billion-dollar power plant with an anticipated lifespan of decades must consider the possibility that as time goes on, the plant will be used less than when its doors first opened.
And if the above article seems to be missing energy storage in its calculus - here’s an article to provide some food for thought.
How batteries for homes and businesses could already make economic sense
Last week, a study in Energy Policy gave a new glimpse into why, despite much anticipation, the so-called energy storage revolution — in which adding batteries or other energy storing devices to the grid lets us decide when we actually want to use electricity that has been generated — has been stalled. In particular, the research found that cheap natural gas has presented direct economic competition to two major uses of energy storage on the large-scale electric grid, rendering storage less profitable.
But if new research from the energy think tank the Rocky Mountain Institute is correct, that still leaves eleven major possible uses for one leading form of energy storage – batteries. The study finds 13 separate ways that batteries could help the electric grid and electricity consumers, and argues that thus far, power companies and other users have failed to adequately recognize how many of these uses can be paired together, giving batteries multiple roles and thereby, much larger economic value.
The result is that even where they’re installed, we’ve often been under-utilizing batteries — and that the devices might already be a smart economic bet for power companies (and customers) if employed to their max. “Under prevailing cost structures, if batteries are allowed to deliver a stack of services to both customers and the utility, they could be cost effective today,” says the Rocky Mountain Institute’s Jesse Morris, an author of the new report.
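The 'stack of services' argument is, at bottom, additive arithmetic: no single revenue stream may cover the battery's cost, but several streams served by the same device can. Below is a toy illustration with invented dollar figures; these are not the RMI report's numbers.

```python
# Hypothetical annual value streams a single battery might provide, in $/kW-yr.
# Figures are invented for illustration; the RMI report identifies 13 services.
value_streams = {
    "demand-charge reduction": 60,
    "backup power":            15,
    "frequency regulation":    45,
    "solar self-consumption":  30,
    "deferred grid upgrades":  25,
}

annualised_battery_cost = 120   # assumed $/kW-yr of installed capacity

best_single_use = max(value_streams.values())
stacked_value = sum(value_streams.values())

for label, value in [("best single use", best_single_use), ("stacked services", stacked_value)]:
    verdict = "covers" if value >= annualised_battery_cost else "does not cover"
    print(f"{label}: ${value}/kW-yr -> {verdict} the assumed cost")
# Whether stacking is allowed in practice depends on market rules: many of
# these services are paid for by different parties (customer vs. utility).
```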
This article discusses something that has the potential to be ‘one of those weak signals’ - something no one with practical common sense would invest much thinking time in. But revolutions do happen, and in this light the Basic Guaranteed Income is worth attention and monitoring.
The Cryptocurrency-Based Projects That Would Pay Everyone Just for Being Alive
Creating a decent basic income system now, even if it had only a little money in it at first, would give every potential beneficiary an incentive to see it grow.
For a policy proposal that has an approximately 0 percent chance of being passed by the US Congress right now, universal basic income is an uncommonly hot topic. The idea that everybody should receive a paycheck simply for being alive is fast becoming a darling of several factions: tech investors who expect millions of jobs to be automated out of existence, libertarians who see it as a way to avoid the inefficiencies of the traditional welfare state, and certain lefties who have embraced it as part of a way to separate government benefits from work. Dylan Matthews of Vox has produced a string of primers on the subject, and The Atlantic recently profiled a Reddit-famous basic income advocate. Yet the idea goes against everything Republicans are supposed to believe about no free rides, as well as Democrats’ preference for welfare programs designed from on high. Even if there were solid mainstream support for basic income, Congress can barely do anything these days, much less consider a wholesale redistribution of wealth.
This may not be such a dead end for the concept as it seems. Over the past few months, basic income advocates tinkering with Bitcoin and other online currencies have created a series of experiments under the premise that we can start playing with basic income now, whether the government gets in on it or not.
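The mechanics of those experiments are simple in principle: verified members register, and whatever currency accumulates in a pool gets split among them at regular intervals. Here is a minimal, purely illustrative sketch of such a dividend scheme; it is not any specific project's protocol.

```python
from dataclasses import dataclass, field

@dataclass
class BasicIncomePool:
    """Toy periodic-dividend scheme: whatever is in the pool each period is
    split equally among verified members.  Illustrative only -- real projects
    differ in funding, identity verification and issuance rules."""
    balance: float = 0.0
    members: set = field(default_factory=set)

    def register(self, member_id: str):
        self.members.add(member_id)   # real systems need proof of unique personhood

    def fund(self, amount: float):
        self.balance += amount        # donations, fees, or newly issued coins

    def distribute(self) -> dict:
        if not self.members:
            return {}
        share = self.balance / len(self.members)
        payouts = {m: share for m in self.members}
        self.balance = 0.0
        return payouts

pool = BasicIncomePool()
for person in ["alice", "bob", "carol"]:
    pool.register(person)
pool.fund(90.0)                       # whatever the pool attracted this period
print(pool.distribute())              # {'alice': 30.0, 'bob': 30.0, 'carol': 30.0}
```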
We all know ‘intellectually’ that all currencies are essentially a form of circulating debt. We all have some sort of ‘moral frame’ that shapes how we reason about debt and, interestingly, we don’t seem to apply that frame with the same rigor when reasoning about the importance of credit. This article is meant to complement the preceding one.
“Don’t Owe. Won’t Pay.” Everything You’ve Been Told About Debt Is Wrong
With the nation’s household debt burden at $11.85 trillion, even the most modest challenges to its legitimacy have revolutionary implications.
The legitimacy of a given social order rests on the legitimacy of its debts. Even in ancient times this was so. In traditional cultures, debt in a broad sense—gifts to be reciprocated, memories of help rendered, obligations not yet fulfilled—was a glue that held society together. Everybody at one time or another owed something to someone else. Repayment of debt was inseparable from the meeting of social obligations; it resonated with the principles of fairness and gratitude.
The moral associations of making good on one’s debts are still with us today, informing the logic of austerity as well as the legal code. A good country, or a good person, is supposed to make every effort to repay debts. Accordingly, if a country like Jamaica or Greece, or a municipality like Baltimore or Detroit, has insufficient revenue to make its debt payments, it is morally compelled to privatize public assets, slash pensions and salaries, liquidate natural resources, and cut public services so it can use the savings to pay creditors. Such a prescription takes for granted the legitimacy of its debts.
Today a burgeoning debt resistance movement draws from the realization that many of these debts are not fair. Most obviously unfair are loans involving illegal or deceptive practices—the kind that were rampant in the lead-up to the 2008 financial crisis. From sneaky balloon interest hikes on mortgages, to loans deliberately made to unqualified borrowers, to incomprehensible financial products peddled to local governments that were kept ignorant about their risks, these practices resulted in billions of dollars of extra costs for citizens and public institutions alike.
A movement is arising to challenge these debts. In Europe, the International Citizen debt Audit Network (ICAN) promotes “citizen debt audits,” in which activists examine the books of municipalities and other public institutions to determine which debts were incurred through fraudulent, unjust, or illegal means. They then try to persuade the government or institution to contest or renegotiate those debts. In 2012, towns in France declared they would refuse to pay part of their debt obligations to the bailed-out bank Dexia, claiming its deceptive practices resulted in interest rate jumps to as high as 13 percent. Meanwhile, in the United States, the city of Baltimore filed a class-action lawsuit to recover losses incurred through the Libor rate-fixing scandal, losses that could amount to billions of dollars.
I’ve been anticipating this since I first heard of 3D printing - and imagined the day would not be far off when mass-produced shoes would be a thing of the ‘dark ages’. Soon everyone can have inexpensive personalized orthotic shoes available at any mall.
Nike’s COO thinks we could soon 3D print Nike sneakers at home
In theory, 3D printing offers a future where you could easily print just about anything you want. So far, it’s failed to be the miracle consumers were promised, but there’s one believer who’s worth paying attention to.
Eric Sprunk, Nike’s COO, recently attended a summit held by tech news site GeekWire, where he talked about the innovation in Nike’s Flyknit technology and what it suggests about the way sneakers could be made in the future. Based on what Nike is already doing with Flyknit, Sprunk says the ability for consumers to 3D print a pair of sneakers is close at hand.
The way it might work goes something like this: You could head to Nike’s website, customize a sneaker to your specifications, and buy a file containing the instructions for the 3D printer. If you have a printer at home, you could print it yourself and have a new pair of sneakers in a matter of hours. If you don’t, you could take the file to a Nike store and have them print it for you.