As the decade wore on, there were tremors of unease. The industry was running out of albums to reissue, battling over price with supermarkets and big-box retailers, and disturbed by the introduction of CD burners.
“Arguably, it’s why they missed the MP3, because they were so concerned about compact-disc burners,” says Witt. “If you read corporate literature about forward-facing risks to the business in the late 90s, this is one of the top things they’re talking about, if not the top. And the impact was real. If bootleg discs flood the market they kill sales, no question about it.”
Bootleg CDs were a danger the industry could get its head around – you could hold one in your hand. What it couldn’t comprehend was the threat of the MP3: the idea that music could transcend physical formats. “That happened for two reasons,” says Witt. “One was they were enjoying unbelievable profits. Two, the studio engineers hated the way the MP3 sounded and refused to engage with it. A lot of artists hated the way it sounded, too.” What the audiophiles didn’t realise was that most consumers couldn’t tell the difference. “What was the audio experience before the compact disc?” says Witt. “It was cheap vinyl or an AM transistor radio on the beach, and MP3 sounds better than either of those.”
Rougvie suggests a third reason: fierce resistance from retailers who, understandably, considered the MP3 an existential threat. “Distributors and record stores were threatening to return every Ryko title they had, just because we were selling 10 or 12 MP3s every week. If that’s what we were feeling, I can only imagine what kind of pushback EMI or Warners were getting.”
Just like their predecessors in Greece in 1982, 90s executives were too busy worrying about the next quarter to consider the next decade. The status quo was perfect, until it wasn’t. “My biggest bugbear about this industry is that they all think short-term,” says Webster. “Nobody ever thinks long-term. All these executives were sitting there being paid huge bonuses on increased profits and they didn’t care. I don’t think anyone saw it coming. I remember the production guy at Virgin saying, ‘In a few years, you’re going to be able to carry all the music you want around on something the size of a credit card.’ And we all laughed. Don’t be ridiculous! How can you do that?”
How the compact disc lost its shine
“It’s not that we didn’t think about security,” Clark recalled. “We knew that there were untrustworthy people out there, and we thought we could exclude them.”
… the young newcomers vs. the establishment — but also technological ones. Telephone networks, it was often said, had an intelligent core — the switches that ran everything — and “dumb” edges, meaning the handsets in nearly every home and business in the nation. The Internet, by contrast, would have a “dumb” core — all the network did was carry data — with intelligent edges, meaning the individual computers controlled by users.
A “dumb” core offered few opportunities for centralized forms of security but made it easy for new users to join. This model worked so long as the edges were controlled by colleagues who shared motives and a high degree of trust. But that left the edges with a responsibility to serve as gatekeepers to the network.
“We’ve ended up at this place of security through individual vigilance,” said Abbate, the Virginia Tech historian. “It’s kind of like safe sex. It’s sort of ‘the Internet is this risky activity, and it’s up to each person to protect themselves from what’s out there.’ . . . There’s this sense that the [Internet] provider’s not going to protect you. The government’s not going to protect you. It’s kind of up to you to protect yourself.”
The Internet’s founders saw its promise
but didn’t foresee users attacking one another
That access to the Internet is a human right seems increasingly self-evident if we want people to be able to participate fully in society. But our access to and use of the Internet need more support if we are to have a robust and democratic society in the digital environment.
UN report: encryption and anonymity online are necessary to advance human rights
Encryption and anonymity on the internet are necessary for the advancement of human rights, according to a new report from the United Nations. The report from David Kaye, a UN special rapporteur on freedom of expression, asserts that privacy is a “gateway for freedom of opinion and expression.” Soon to be presented before the UN’s Human Rights Council, the report examines in detail how encryption and anonymity impact freedom of opinion and expression.
The report concludes that encryption and anonymity “deserve strong protection” because they “enable individuals to exercise their rights to freedom of opinion and expression in the digital age.”
“States should not restrict encryption and anonymity,” Kaye writes. He notes that “blanket prohibitions fail to be necessary and proportionate,” a reference to standards for human rights law as articulated in the 13 International Principles on the Application of Human Rights to Communications Surveillance. The 13 Principles are “compelling demonstrations of the law that should apply in the context of the digital age,” observes Kaye.
Kaye also calls out specific practices that threaten user rights. “States should avoid all measures that weaken the security individuals may enjoy online, such as through backdoors, weak encryption standards and key escrows.”
Privacy rights are under daily attack. This year in Belarus, the Communications Ministry imposed a block on internet anonymizers including Tor, an anonymizer that China has successfully blocked. A report released Tuesday revealed that the Canadian telco Rogers is collaborating with ETSI, a major telecommunications standards body, in order to undermine “the core security design decisions” of end-to-end encryption. Meanwhile, a debate continues in the U.S. on requiring “backdoors” in technology to facilitate government access to users’ devices and data.
The UN Report can be found here as a 20-page .doc:
Speaking about the Internet, here’s a great article about its inherent flaw.
The Internet’s founders saw its promise
but didn’t foresee users attacking one another
NET OF INSECURITY: A flaw in the design
David D. Clark, an MIT scientist whose air of genial wisdom earned him the nickname “Albus Dumbledore,” can remember exactly when he grasped the Internet’s dark side. He was presiding over a meeting of network engineers when news broke that a dangerous computer worm — the first to spread widely — was slithering across the wires.
One of the engineers, working for a leading computer company, piped up with a claim of responsibility for the security flaw that the worm was exploiting. “Damn,” he said. “I thought I had fixed that bug.”
But as the attack raged in November 1988, crashing thousands of machines and causing millions of dollars in damage, it became clear that the failure went beyond a single man. The worm was using the Internet’s essential nature — fast, open and frictionless — to deliver malicious code along computer lines designed to carry harmless files or e-mails.
Decades later, after hundreds of billions of dollars spent on computer security, the threat posed by the Internet seems to grow worse each year. Where hackers once attacked only computers, the penchant for destruction has now leapt beyond the virtual realm to threaten banks, retailers, government agencies, a Hollywood studio and, experts worry, critical mechanical systems in dams, power plants and aircraft.
This is an interesting discussion of the transition at a company called Zappos, which has decided to implement an organizational architecture called Holacracy. This is worth the read - although the transition to a new form of governance is far from smooth, it is an important experiment, sure to provide many lessons.
Is Holacracy Succeeding At Zappos?
What’s going on at Zappos with its implementation of Holacracy? “Confusion,” says the Wall Street Journal. “Employees don’t seem too happy,” says Slate. “No boss, no title, no dice,” says CNBC. “It’s either the future of management,” says Aimee Groth at Quartz, “or a social experiment gone awry.”
In January 2014, my article here described how and why Zappos had embraced the management scheme known as Holacracy and expected to complete implementation by end-2014.
Not everything has gone according to plan. Critics of Holacracy—traditional managers who basically want Holacracy to fail—have seized on the fact that CEO Tony Hsieh said recently that his company had not “made fast enough progress towards self-management.” They were delighted to hear that when employees were offered several months’ severance pay to leave if they didn’t embrace what was happening, 14% opted to leave–a huge increase on the normal 1% attrition rate.
“Holacracy’s biggest value,” says Groth quoting Harvard management professor Ethan Bernstein, who has studied Zappos and other companies that are pursuing new modes of organization, is that it provides a framework for effective conflict resolution: “Holacracy replaces that [traditional] structure with a structuring process, at least for particularly frequent kinds of conflicts, to resolve conflicts in a potentially less-hierarchical, more self-organized, and more adaptive fashion.”
It’s too early to say whether the structuring process of Holacracy will succeed at Zappos. Hsieh’s aspiration is noble—to make Zappos more efficient and more able to “deliver happiness.” Time will tell whether it will help Zappos become more agile to achieve this—or the opposite.
However, here’s an interesting moment in the sharing economy.
Airbnb Is Approaching One Million Guests Per Night
Nearly one million people will be staying at an Airbnb on any given night this summer, according to CEO Brian Chesky.
Chesky’s seven-year-old startup, which was valued at $13 billion back in October, is growing very quickly. Airbnb is in 34,000 cities around the world, and more than 800,000 people will stay in an Airbnb every night this summer, up from just half a million now.
Airbnb’s business allows people to rent out their homes and spare bedrooms to travelers around the world. Chesky says the company has had some notable successes lately — during the 2014 World Cup in Brazil, for instance. Chesky claimed that 20 percent of visitors to Brazil for the tournament stayed in an Airbnb, more than 120,000 people.
Here’s a very interesting advance in machine learning - a must read for anyone interested in the psychological health of those working in fields of high stress. The very fact that a computer is involved seems to make it easier for patients. It’s a 5 min audio and a short read. There is also a very nice gif of the analytical process.
How A Machine Learned To Spot Depression
I'm in a booth with a computer program called Ellie. She's on a screen in front of me.
Ellie was designed to diagnose post-traumatic stress disorder and depression, and when I get into the booth she starts asking me questions — about my family, my feelings, my biggest regrets.
Emotions seem really messy and hard for a machine to understand. But Skip Rizzo, a psychologist who helped design Ellie, thought otherwise.
When I answer Ellie's questions, she listens. But she doesn't process the words I'm saying. She analyzes my tone. A camera tracks every detail of my facial expressions.
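As a rough illustration of the kind of analysis described above (and emphatically not the actual model behind Ellie), here is a toy sketch that turns a handful of hypothetical nonverbal features into a screening score. The feature names, weights and threshold logic are invented for illustration only.

```python
import math

# Illustrative sketch only: a toy screening score built from the kinds of
# nonverbal signals the article describes (tone and facial expression),
# NOT the actual model behind Ellie. Features and weights are invented.

def depression_screening_score(features):
    """Combine hypothetical nonverbal features into a 0-1 risk score."""
    # Hypothetical weights: flatter speech, less smiling and a downward
    # gaze are treated as weak indicators; no single cue is decisive.
    weights = {
        "pitch_variability": -0.4,    # monotone voice -> lower variability
        "smile_intensity": -0.3,
        "gaze_downward_ratio": 0.3,
        "speech_pause_ratio": 0.4,
    }
    raw = sum(weights[name] * features.get(name, 0.0) for name in weights)
    # Squash to 0-1 with a logistic function so the score reads like a probability.
    return 1.0 / (1.0 + math.exp(-raw))

# Example: in a real system these values would come from audio analysis
# and camera-based face tracking.
print(depression_screening_score({
    "pitch_variability": 0.2,
    "smile_intensity": 0.1,
    "gaze_downward_ratio": 0.7,
    "speech_pause_ratio": 0.6,
}))
```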
This article is another movement in the evolution of Watson. It marks a clear trajectory toward the future of knowledge management - a form of external memory that can also incorporate Big Data analytics. Watson is not an AI that will take over the world - it’s showing how we will enhance human performance and creativity through a synthesis.
IBM Watson gets smart in the oil refinery business
Australian energy company Woodside Energy hopes to strike a gusher in its own backyard -- a gusher of information, that is.
The company plans to improve the efficiency of its refinery plants by using IBM's Watson cognitive computing services to gather and share insights from its own engineers.
Woodside "wants to take the knowledge of their senior engineers and make it available, with the support of Watson, to a broader range of employees," said Ed Harbour, IBM vice president of the company's Watson Group.
The two companies will partner to create a search engine, or "cognitive advisory service," that can be used by Woodside's engineering teams to ask complex questions about facilities management and design, according to IBM. Energy companies such as Woodside must routinely revamp production facilities to improve their efficiency.
For this system, an engineer could ask Watson, for instance, why a pressure gauge reading is not within a standard range of values. Watson can probe its memory banks and return what it considers to be the most relevant set of answers. Over time, it can improve the quality of the answers based on user feedback.
The idea behind the service is to gather into a central location all the information engineers have on running Woodside's refineries. The engineers have already digitally documented many aspects of running refineries, which Watson can ingest. Additionally, the two companies will interview senior engineers to collect more institutional information.
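IBM hasn’t published technical detail here, but the loop the article describes (ask a question, get ranked answers drawn from ingested documents, improve the ranking from user feedback) can be sketched generically. The snippet below is only an illustration of that retrieval-with-feedback pattern; the corpus, scoring rule and feedback boost are invented, and this is not the Watson API.

```python
# Generic sketch of the "ask -> ranked answers -> feedback" loop described
# above. Not the Watson API: the corpus, scoring and feedback mechanism
# are deliberately simple illustrations.

corpus = {
    "doc1": "Pressure gauge readings drift when the relief valve sticks.",
    "doc2": "Standard operating range for compressor pressure is 80-120 psi.",
    "doc3": "Facility redesign checklist for heat exchanger maintenance.",
}
feedback_boost = {doc_id: 0.0 for doc_id in corpus}  # learned from thumbs-up/down

def score(question, doc_id):
    # Crude relevance: word overlap plus whatever boost feedback has earned.
    q_words = set(question.lower().split())
    d_words = set(corpus[doc_id].lower().split())
    return len(q_words & d_words) + feedback_boost[doc_id]

def ask(question, top_k=2):
    ranked = sorted(corpus, key=lambda d: score(question, d), reverse=True)
    return ranked[:top_k]

def record_feedback(doc_id, helpful):
    # Over time, answers users found helpful float to the top.
    feedback_boost[doc_id] += 0.5 if helpful else -0.5

answers = ask("why is the pressure gauge reading outside the standard range")
record_feedback(answers[0], helpful=True)
print(answers)
```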
This is a fantastic development. The visual presentation shows the progress being made in the interface for the Internet of Things - a Must See.
prosthetic knowledge
Information that a person does not know, but can access as needed using technology
There is also a 4 min video about Google’s Project Soli.
Welcome to Project Soli
https://www.youtube.com/watch?v=0QNiZfSsPc0
Project Soli is developing a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.
The increasing power of Watson and other forms of AI powered by Big Data and algorithmic systems is moving beyond any single Watson - remember, the network is the computer. This is a must read for anyone interested in the future of security, surveillance, privacy and more.
The Violence of Algorithms
Why Big Data Is Only as Smart as Those Who Generate It
In December 2010, I attended a training session for an intelligence analytics software program called Palantir. Co-founded by Peter Thiel, a techno-libertarian Silicon Valley billionaire, Palantir is a slick tool kit of data visualization and analytics capabilities marketed to and widely used by the NSA, the FBI, the CIA, and other U.S. national security and policing institutions.
The training session took place in Tyson’s Corner, Virginia, just outside Washington, D.C., at a Google-esque office space complete with scooters, a foosball table, and a kitchen stocked with energy drinks. I was taking the course to explore the potential uses of the tool for academic research.
We spent the day conducting a demonstration investigation. We were first given a range of data sets and, one by one, we uploaded them into Palantir. Each data set showed us a new analytic capability of the program: thousands of daily intelligence reports were disaggregated to their core pieces of information and correlated with historical data; satellite images were overlaid with socio-economic, air strike, and IED data. And in this process, the promise of Palantir was revealed: with more data comes greater clarity. For analysts who spend their days struggling to interpret vast streams of data, the Palantir demo was an easy sell.
In our final exercise, we added surveillance data detailing the planned movements of a suspected insurgent. Palantir correlated the location and time of these movements with the planned movements of a known bomb maker. And there the training ended. It was quite obvious that the next step, in “real life,” would be violent. The United States would send in a drone or Special Forces team. We in the demo, on the other hand, just went home.
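At its core, the correlation step described above is a spatio-temporal join: flag the points where two tracks coincide in place and time. The toy sketch below illustrates that generic idea only; it is not Palantir’s software, and the data, coordinates and thresholds are invented.

```python
# Toy illustration of spatio-temporal correlation: flag sightings of two
# people that fall within a distance and time window of each other.
# Generic sketch only; the data is invented.
from datetime import datetime, timedelta
from math import hypot

track_a = [((3.2, 7.1), datetime(2010, 12, 1, 14, 0))]
track_b = [((3.3, 7.0), datetime(2010, 12, 1, 14, 20)),
           ((9.9, 1.2), datetime(2010, 12, 2, 9, 0))]

def correlated(a, b, max_km=1.0, max_gap=timedelta(hours=1)):
    """Return the pairs of timestamps where the two tracks nearly coincide."""
    matches = []
    for (xa, ya), ta in a:
        for (xb, yb), tb in b:
            if hypot(xa - xb, ya - yb) <= max_km and abs(ta - tb) <= max_gap:
                matches.append((ta, tb))
    return matches

print(correlated(track_a, track_b))  # -> one co-location match
```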
This program raises many challenging questions. Much of the data used was inputted and tagged by humans, meaning that it was chock full of human bias and errors. The algorithms on which the system is built are themselves coded by humans, so they too are subjective. Perhaps most consequentially, however, although the program being demonstrated was intended to inform human decision-making, that need not be the case. Increasingly, such tools, and the algorithms that power them, are being used to automate violence.
Here is an interesting HBR article about the maker movement (e.g. 3D printing) in Africa. This should be understood as a plausibly ubiquitous capability.
Africa’s Maker Movement Offers Opportunity for Growth
In a glittering ceremony in Skhirat, Morocco, three African innovators emerged as winners in the annual Innovation Prize for Africa competition. The African Innovation Foundation, which organizes the event, focuses on identifying homegrown and scalable innovations that address local problems in Africa. From Foufoumix (a kitchen appliance to replace a mortar and pestle in fufu food preparation) to mimosa pudica solar (which generates electricity from a weed), they are cultivating a network of grassroots innovators of different levels of experience who offer new ways to fix problems in their communities.
Designers like these have been historically neglected by African policymakers because of the largely unconventional nature of their enterprises. So despite Africa’s recognition as a land of relentless ingenuity with a revered tradition of local creativity, the record for taking those sparks of genius to the next level has been discouraging.
In the last few years, though, the continent has been experiencing a redesign across its major cities with strong civic participation in making and building things. These activities are happening in people’s homes, and places which the enthusiasts call makerspaces or hackerspaces. With ubiquitous computing power, freely available APIs, platforms on the web for sharing, and cheap tools like 3-D printers and lasers, the Afrimakers as they are called (or “makers” for short) are finding it easier to demonstrate their ideas. What is unfolding is a virtuoso system with a “started in Africa” mind-set that could potentially remake what Africans buy. This is especially exciting because it empowers people to use their local expertise, know-how, and hands-on skills to solve problems that exist in their daily lives.
So here’s a short article about the famous open-source software and its contributions to ‘everything’.
11 Ways That Linux Contributes to Tech Innovation
We all know that Linux runs much of modern society – from data centers and mobile phones, to air traffic control and stock exchanges. But what are some of the ways that Linux continues to contribute to innovation in the tech industry?
Over the past six months I've asked new Linux Foundation corporate members on the cutting edge of technology to weigh in on what interesting or innovative trends they're witnessing and the role that Linux plays in them. Here's what engineers, CTOs, and other business leaders from companies including CoreOS, Rackspace, SanDisk, and more had to say.
The full interviews are part of The Companies That Support Linux series. For more on Linux Foundation corporate membership visit: http://www.linuxfoundation.org/about/join/corporate.
Here’s one more signal of the trajectory of AI in the digital environment - the emergence of the network-as-the-computer.
AI Supercomputer Built by Tapping Data Warehouses for Their Idle Computing Power
Sentient claims to have assembled machine-learning muscle to rival Google by rounding up idle computers.
Recent improvements in speech and image recognition have come as companies such as Google build bigger, more powerful systems of computers to run machine-learning software. Now a relative minnow, a private company called Sentient with only about 70 employees, says it can cheaply assemble even larger computing systems to power artificial-intelligence software.
The company’s approach may not be suited to all types of machine learning, a technology that has uses as varied as facial recognition and financial trading. Sentient has not published details, but says it has shown that it can put together enough computing power to produce significant results in some cases.
Sentient’s power comes from linking up hundreds of thousands of computers over the Internet to work together as if they were a single machine. The company won’t say exactly where all the machines it taps into are. But many are idle inside data centers, the warehouse-like facilities that power Internet services such as websites and mobile apps, says Babak Hodjat, cofounder and chief technology officer at Sentient. The company pays a data-center operator to make use of its spare machines.
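Sentient has not published details, so the sketch below is only a generic illustration of the underlying pattern: a coordinator keeps a queue of small work units and hands one to each machine that reports itself idle. The task and machine names are invented.

```python
# Generic sketch of farming work out to idle machines: a coordinator keeps a
# queue of small tasks and assigns one to each worker that checks in as idle.
# An illustration of the pattern only, not Sentient's actual system.
from queue import Queue

tasks = Queue()
for candidate in range(8):          # e.g. 8 candidate models to evaluate
    tasks.put(f"evaluate-model-{candidate}")

def worker_checks_in(worker_id):
    """Called whenever a data-center machine reports spare capacity."""
    if tasks.empty():
        return None
    task = tasks.get()
    print(f"{worker_id} assigned {task}")
    return task

assignments = {}
for machine in ["dc1-node04", "dc2-node17", "dc1-node22"]:
    assignments[machine] = worker_checks_in(machine)
```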
Perhaps we can develop some great machine learning - but we have to be very wary of the business models that will be used to make such AI available. Here’s an important ‘lesson learnt’ perhaps. The video that went viral is 30 sec.
Self-parking Volvo ploughs into journalists after owner neglects to pay for extra feature that stops cars crashing into people
A video showing a car attempting to park but actually plowing into journalists might have resulted from the Volvo’s owner not paying an extra fee to have the car avoid pedestrians.
The video, taken in the Dominican Republic, shows a Volvo XC60 reversing itself, waiting, and then driving back into pedestrians at speed. The horrifying pictures went viral and were presumed to have resulted from a malfunction with the car — but the car might not have had the ability to recognise a human at all.
If the above Volvo scenario is scary - here’s what’s really scary - scarier than super AI. For anyone interested in science fiction - I’ve recommended Daniel Suarez a number of times - his books ‘Daemon’ & its sequel ‘Freedom’ are must reads for anticipating the digital environment. But his third book - ‘Kill Decision’ - is scary in anticipating not just the drone but the autonomous drone. If we thought the Improvised Explosive Device (IED) was terrifying, think about the improvised autonomous explosive drone (IAED); it could be as innocent as an Amazon delivery drone.
WHAT MIGHT A KILLERBOT ARMS RACE LOOK LIKE?
THE COMING SWARM
When they appear on the horizon, the robots coming to kill you won't necessarily look like warplanes. That's limited, human-centric thinking, says Stuart Russell, a computer scientist at the University of California at Berkeley, and it only applies to today's unmanned weapons. Predator and Reaper drones were built with remote pilots and traditional flight mechanics in mind, and armed with the typical weapons of air war--powerful missiles, as useful for destroying buildings and vehicles as personnel. Tomorrow's nimbler, self-piloted armed bots won't simply be updated tools for old-fashioned air strikes. They'll be vectors for slaughter.
More likely, the lethal autonomous weapons systems (LAWS) to come will show up in a cloud of thousands or more. Each robot will be small, cheap, and lightly armed, packing the bare minimum to end a single life at a time. Predicting the exact nature of these weapons is as macabre as it is speculative, but to illustrate how we should adjust our thinking on the subject of deploying autonomous robots on the battlefield, Russell offers two hypotheticals. “It would perhaps be able to fire miniature projectiles, just powerful enough to shoot someone through their eyeball,” he says. “It would be pretty easy to do that from 30 or 40 meters away. Or it could put a shaped charge directly on a person's cranium. One gram of explosives is enough to blow a hole in sheet metal. That would probably be more than enough.”
But here’s another view.
Robots Could Redefine Hotel Room Service
Fast and friendly delivery bots can bring items to guests in minutes
Room service is one of the many luxuries of staying at a hotel—whether it’s having breakfast, fresh towels, or the daily newspaper delivered to your door. But not all guests want to wait for up to an hour or let hotel staffers inside their rooms. That’s where service bots come in.
Service robot prototypes were put to work at two Aloft hotels in California’s San Francisco Bay Area: the first in August 2014 and the second this past March. Designed by Savioke, a Santa Clara, Calif., manufacturer of autonomous robots for the service industry, the SaviOne prototype was developed 19 months ago by IEEE Member Steve Cousins. He and fellow alumni from Willow Garage—creators of the autonomous PR2 robot, an R&D platform that can be programmed to help humans with household tasks—decided it was time to develop bots that could interact with the public.
“We designed SaviOne to be a useful tool for hotel staff and increase their productivity without it alienating guests,” says Cousins. “To our surprise, guests found the bot delightful. In fact, people have said they’re glad it was a robot and not a person at the door.”
This seems like a fundamental breakthrough in both material and magnetic science - something that changes a basic 175-year-old assumption about the nature of magnets, with lots of potential new uses. Perhaps only certain types of geeks will like this, but the important point is that the horizon of fundamental new science is still expanding.
New class of magnets that swell in volume in magnetic field
Researchers at the University of Maryland (UMD) and Temple University, have discovered a new class of magnets that swell in volume when placed in a magnetic field and generate negligible amounts of wasteful heat during energy harvesting.
“Our findings fundamentally change the way we think about a certain type of magnetism that has been in place since 1841,” said Chopra, who also runs the Materials Genomics and Quantum Devices Laboratories in Temple’s College of Engineering.
The researchers and others say this transformative breakthrough has the potential to not only displace existing technologies but create altogether new applications due to the unusual combination of magnetic properties.
“Chopra and Wuttig’s work is a good example of how basic research advances can be true game changers,” said Tomasz Durakiewicz, National Science Foundation condensed matter physics program director. “Their probing of generally accepted tenets about magnetism has led to a new understanding of an old paradigm. This research has the potential to catapult sustainable, energy-efficient materials in a very wide range of applications.”
OK, now let’s go to clothing - new materials for the digital environment - a short article about a Google project.
Levi’s Is The First Official Partner For Google ATAP’s Jacquard Connected Fabric
Google was showing off its Project Jacquard at I/O 2015, a connected fabric tech that lets you build connected surfaces right into your clothes, in a way that makes it easy to connect to devices and power, while letting clothes makers make stuff that actually looks good. It seems promising, especially because they’ve already signed on Levi’s as a first partner.
Levi’s, the SF-based maker of jeans and various other clothing, came on stage at the I/O ATAP special presentation today, and discussed why it decided to jump into this new tech. Basically, they were looking to make it easier to integrate our device use into our daily lives, making it easier to access and less generally obtrusive.
The company is looking to build its own apps, but is also seeking contributions from the developer community, whose members it says are now also “fashion designers” with the advent of this new tech.
On the energy front - solar is making significant progress.
Solar Shines as Sellers Sometimes Pay Buyers to Use Power
Solar will be the fastest growing source of generating capacity over the next year.
Solar power capacity in the U.S. has jumped 20-fold since 2008 as companies including Apple Inc. use it to reduce their carbon footprints. Rooftop panels are sprouting on homes from suburban New York to Phoenix, driven by suppliers such as SolarCity Corp. and NRG Energy Inc.
Giant farms of photovoltaic panels, including Warren Buffett’s Topaz array in California, are changing power flows in the electrical grid, challenging hydro and conventional generators and creating negative prices on sunny days. The surge comes after shale drilling opened new supplies of natural gas, contributing to the 47 percent drop in oil since June.
“Solar is the new shale,” Michael Blaha, principal analyst of North American power at Wood Mackenzie Ltd. in Houston, said April 8. “Shale has lowered cost and enabled lower natural gas prices. Solar will lower costs for electricity.”
Solar capacity surged 30 percent in 2014 to more than 20 gigawatts and will more than double by the end of 2016, according to the Washington-based Solar Energy Industries Association. That’s enough to power 7.6 million U.S. homes, up from 360,000 in 2009. The biggest gains will be in California, Arizona, Texas, Georgia, New York and New Jersey.
Here’s a new advance that will accelerate solar use in developing areas.
Bringing microgrids to rural villages
An estimated 1.3 billion people around the world lack access to electricity, and as a result spend scarce resources on kerosene and other fuels for lighting. Now MIT researchers have developed a system to enable those in rural villages who can afford solar panels to share power with their neighbors, providing both income for the owners and much-needed power for the neighbors.
The key to the system, developed over two years of research and numerous trips to India, lies in a simple device the team developed that is smaller than a shoebox. The power management unit (PMU) performs a variety of tasks, regulating how electricity from solar panels or other sources gets directed to immediate uses—such as powering lights and cellphones—or to batteries for later use. At the same time, the PMU monitors how much power is going to each user, providing a record that can be used for billing without a need for individual meters.
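A simplified version of the two PMU jobs described above (route solar output to immediate loads first, send the surplus to the battery, and keep a per-user energy tally for billing) might look like the sketch below. The thresholds, tariff and variable names are invented for illustration, not the MIT team’s actual design.

```python
# Simplified sketch of the two PMU roles described above: route power and
# keep a per-user tally for billing. Tariff and values are invented.

TARIFF_PER_KWH = 8.0   # hypothetical billing rate per kWh

def route_power(solar_w, load_w, battery_soc):
    """Decide how many watts go to loads vs. the battery for one time step."""
    to_loads = min(solar_w, load_w)
    surplus = solar_w - to_loads
    to_battery = surplus if battery_soc < 1.0 else 0.0   # don't overcharge
    return to_loads, to_battery

usage_wh = {}  # per-user energy tally, standing in for individual meters

def meter(user, watts, hours):
    usage_wh[user] = usage_wh.get(user, 0.0) + watts * hours

def bill(user):
    return usage_wh.get(user, 0.0) / 1000.0 * TARIFF_PER_KWH

# One evening hour: two neighboring homes draw lighting and phone-charging loads.
meter("home_A", 60, 1.0)
meter("home_B", 40, 1.0)
print(route_power(solar_w=200, load_w=100, battery_soc=0.7))
print(f"home_A owes {bill('home_A'):.2f}")
```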
MIT doctoral students Wardah Inam and Daniel Strawser, under the guidance of electrical engineering professors Rajeev Ram and David Perreault, will head to India next week, along with several other team members, to spend the summer doing field tests of the system. Along the way, they will stop off in Seoul, South Korea, to present an account of their work at the International Conference on Power Electronics.
Here’s another movement in the domain of transforming the world’s energy paradigm.
Tesla Motors Co-Founder Wants To Electrify Commercial Trucks
Twelve years ago, Ian Wright and some fellow engineers launched Tesla Motors, a Silicon Valley company that has helped jumpstart the market for electric cars.
Now, the Tesla co-founder wants to electrify noisy, gas-guzzling trucks that deliver packages, haul garbage and make frequent stops on city streets.
His latest venture, Wrightspeed, doesn't make the whole truck. Rather it sells electric powertrains that can be installed on medium- and heavy-duty commercial vehicles, making them cleaner, quieter and more energy-efficient.
"We save a lot on fuel. We save a lot on maintenance, and we make the emissions compliance much easier," said Wright, a New Zealand-born engineer who left Tesla when it was still a small startup in 2005.
Wrightspeed is one of a growing number of companies that are trying to transform the market for commercial trucks that consume billions of gallons of fuel while spewing tons of carbon dioxide, nitrogen oxide and other pollutants.
While more consumers are switching to electric cars like the Nissan Leaf, Chevy Volt or Tesla Model S, convincing commercial fleet owners to replace their diesel trucks won't be easy.
From renewable energy to alternative food production - here’s another article, on the rapid growth of urban farming. This is a longish article but provides some great comparisons between traditional farming (even with 21st century tech) and the emerging capacity of urban farming.
Farming in the Sky
Why agriculture may someday take place in towers, not fields
The future of farming is looking up—literally, and in more ways than one: There are grow towers, rooftops, and industry talk of Waterworld-style “plant factories” in futuristic floating cities. And this vertical movement is happening for a variety of reasons. For one, by prioritizing localized operations, it offers a remedy to the mounting economic difficulties that independent farmers face when otherwise so easily underpriced by Big Ag. But more importantly, it’s rising out of environmental concerns—space, soil health, climate change, vital ecosystems decimated by monoculture. According to the professor of environmental health sciences Dickson Despommier in his article “The Vertical Farm: Reducing the Impact of Agriculture on Ecosystem Function and Services,” we should expect over the next 50 years for the human population to reach 8.6 billion, requiring an additional growing area “roughly the size of Brazil.”
When I returned home to Wyoming from the cotton fields of Lubbock, I happened upon Nate Storey’s farming operation, Bright Agrotech, in the outskirts of Laramie—a city in the high plains, 7,200 feet above sea level, with long, frigid winters often extending through May, frequently sustaining temperatures (way) below zero. The average growing season can be as short as 51 days. To battle the cold, Bright Agrotech operates in a 2,000-square-foot greenhouse, offering community-supported agriculture (CSA) by producing veggies (herbs and greens year-round, squash and root crops in the summer) for community shareholders. The greenhouse shelters 300 of Bright Agrotech’s patented ZipGrow towers (which they also sell for residential and commercial use), each reaching up to five feet; the company also custom-makes towers up to 17 feet.
In effect, the crops grow upward, maximizing the limited space within the climate-controlled walls of the greenhouse. The crops are fertilized and irrigated by deep-blue tanks of living tilapia, swimming around just out of sight. The fish tanks are rigged into part of a system that uses principles of hydroponics and aquaculture: one, the practice of using mineral-nutrient solutions in water for soil-less growing, and the other, the practice of using aquatic-life byproduct to fertilize. The waste of the tilapia is broken down, absorbed by the plants for food, and then the water is recirculated through the crops. The result? Bright Agrotech uses only 60 gallons of water a day, or about 22,000 a year (which, if you compare to water use in the average American household—400 gallons a day for a family of four—isn’t bad.) Plus a conventionally grown plot of that size would require 20 times that amount annually, according to Storey, and traditional commercial ag loses half of its water to evaporation, run-off, and flood irrigation.
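The water figures quoted above are easy to sanity-check with a few lines of arithmetic; the “conventional plot” number below simply applies the article’s stated 20x multiplier, and the household figure is the article’s 400 gallons a day.

```python
# Quick arithmetic check of the water figures quoted above.
greenhouse_gal_per_day = 60
greenhouse_gal_per_year = greenhouse_gal_per_day * 365        # ~21,900 ("about 22,000")

household_gal_per_day = 400                                    # family of four, per the article
household_gal_per_year = household_gal_per_day * 365           # 146,000

conventional_plot_gal_per_year = 20 * greenhouse_gal_per_year  # the article's 20x claim

print(greenhouse_gal_per_year, household_gal_per_year, conventional_plot_gal_per_year)
```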
Here’s some very good news on the antibiotic front - as well as progressing the domestication of DNA.
Researchers engineer E. coli to produce new forms of popular antibiotic
Like a dairy farmer tending to a herd of cows to produce milk, researchers are tending to colonies of the bacteria Escherichia coli (E. coli) to produce new forms of antibiotics—including three that show promise in fighting drug-resistant bacteria.
The research, which will be published May 29 in the journal Science Advances, was led by Blaine A. Pfeifer, an associate professor of chemical and biological engineering in the University at Buffalo School of Engineering and Applied Sciences. His team included first author Guojian Zhang, Yi Li and Lei Fang, all in the Department of Chemical and Biological Engineering.
For more than a decade, Pfeifer has been studying how to engineer E. coli to generate new varieties of erythromycin, a popular antibiotic. In the new study, he and colleagues report that they have done this successfully, harnessing E. coli to synthesize dozens of new forms of the drug that have a slightly different structure from existing versions.
Three of these new varieties of erythromycin successfully killed bacteria of the species Bacillus subtilis that were resistant to the original form of erythromycin used clinically.
'We're focused on trying to come up with new antibiotics that can overcome antibiotic resistance, and we see this as an important step forward,' said Pfeifer, Ph.D.
'We have not only created new analogs of erythromycin, but also developed a platform for using E. coli to produce the drug,' he said. 'This opens the door for additional engineering possibilities in the future; it could lead to even more new forms of the drug.'
So let’s look a bit at the past just to see the speed of the present.
How the compact disc lost its shine
It’s 30 years since Dire Straits’ Brothers in Arms began the CD boom. How did the revolution in music formats come about, and what killed it?
Thirty years ago this month, Dire Straits released their fifth album, Brothers in Arms. En route to becoming one of the best-selling albums of all time, it revolutionised the music industry. For the first time, an album sold more on compact disc than on vinyl and passed the 1m mark. Three years after the first silver discs had appeared in record shops, Brothers in Arms was the symbolic milestone that marked the true beginning of the CD era.
“Brothers in Arms was the first flag in the ground that made the industry and the wider public aware of the CD’s potential,” says the BPI’s Gennaro Castaldo, who began a long career in retail that year. “It was clear this was a format whose time had come.”
As Greg Milner writes in his book Perfecting Sound Forever, the compact disc became “the fastest-growing home entertainment product in history”. CD sales overtook vinyl in 1988 and cassettes in 1991. The 12cm optical disc became the biggest money-spinner the music industry had ever seen, or will ever be likely to see. “In the mid-90s, retailers and labels felt indestructible,” says Rob Campkin, who worked for HMV between 1988 and 2004. “It felt like this was going to last for ever.”
It didn’t, of course. After more than a decade of decline, worldwide CD income was finally surpassed by digital music revenues last year. With hindsight, it’s clear that technological changes had made that inevitable, but almost nobody had foreseen it, because the CD was just too successful. It was so popular and so profitable that the music industry couldn’t imagine life without it. Until it had to.
Speaking about the near past that seems so far away - here’s something that was a huge worry in the 60s and 70s. This is worth the read. There’s a 13 min video that is also worth the watch.
The Unrealized Horrors of Population Explosion
The second half of the 1960s was a boom time for nightmarish visions of what lay ahead for humankind. In 1966, for example, a writer named Harry Harrison came out with a science fiction novel titled “Make Room! Make Room!” Sketching a dystopian world in which too many people scrambled for too few resources, the book became the basis for a 1973 film about a hellish future, “Soylent Green.” In 1969, the pop duo Zager and Evans reached the top of the charts with a number called “In the Year 2525,” which postulated that humans were on a clear path to doom.
No one was more influential — or more terrifying, some would say — than Paul R. Ehrlich, a Stanford University biologist. His 1968 book, “The Population Bomb,” sold in the millions with a jeremiad that humankind stood on the brink of apocalypse because there were simply too many of us. Dr. Ehrlich’s opening statement was the verbal equivalent of a punch to the gut: “The battle to feed all of humanity is over.” He later went on to forecast that hundreds of millions would starve to death in the 1970s, that 65 million of them would be Americans, that crowded India was essentially doomed, that odds were fair “England will not exist in the year 2000.” Dr. Ehrlich was so sure of himself that he warned in 1970 that “sometime in the next 15 years, the end will come.” By “the end,” he meant “an utter breakdown of the capacity of the planet to support humanity.”
As you may have noticed, England is still with us. So is India. Hundreds of millions did not die of starvation in the ’70s. Humanity has managed to hang on, even though the planet’s population now exceeds seven billion, double what it was when “The Population Bomb” became a best-seller and its author a frequent guest of Johnny Carson’s on “The Tonight Show.” How the apocalyptic predictions fell as flat as ancient theories about the shape of the Earth is the focus of this installment of Retro Report, a series of video documentaries examining significant news stories of the past and their aftermath.
For Fun
Here’s where food and algorithms combine. :)
Food formulas: the science behind your favourite meals
Scientists have cracked the ratio for the perfect strawberries and cream. But do you know the perfect proportions for a Yorkshire pudding or G&T?
Eating strawberries and cream is a quintessential part of English summer - along with stuffing yourself with fish and chips on the beach until you feel sick, and insisting on having a barbecue when it's actually still cold enough for jumpers.
Now scientists have cracked the formula for making the perfect bowl. According to Dr Stuart Fairmond, the ideal strawberries-to-cream weight ratio is 70:30 - or one tablespoon of single cream per two fresh medium-sized strawberries - and the treat must be devoured in two minutes and 50 seconds for optimum taste (never a problem in my household).
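For anyone who wants to geek out, the reported formula reduces to a couple of lines of code. This toy sketch takes only the 70:30 ratio and the 2 min 50 s window from the article; everything else is illustrative.

```python
# Toy calculator for the reported strawberries-and-cream formula:
# 70:30 strawberries-to-cream by weight, eaten within 2 min 50 s.
def cream_for(strawberry_grams):
    """Grams of single cream for a given weight of strawberries (70:30 ratio)."""
    return strawberry_grams * 30 / 70

EATING_WINDOW_SECONDS = 2 * 60 + 50   # the article's "optimum taste" window

print(f"{cream_for(140):.0f} g of cream for 140 g of strawberries,"
      f" to be eaten within {EATING_WINDOW_SECONDS} s")
```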
If you like nothing better than geeking out over your G&T or getting technical with your toast, here are some other food formulas.