Are bottom-up sustainability initiatives filling the gap left by Rio+20? | The Guardian

Small, grassroots sustainability efforts have proliferated since Rio+20, but a status report signals some challenges and limitations

A woman carries freshly formed earthen lamps for drying outside her home in Mumbai, India.

In the absence of a meaningful global agreement at the Rio+20 Earth summit last summer, expectations collapsed – but didn’t disappear. They shifted from big to small, from global to local.

In response, sustainability leaders recalibrated their attention to the potential for local and regional initiatives to fill this leadership vacuum. After all, while climate change remains an immutably global problem, many of the most immediate environmental and developmental challenges – toxic pollution, education, public health, and water and land degradation, to name a few – must be dealt with at a local or regional level.

A year or so on, the question remains: is this bottom-up, smaller-scale approach working? With more than 1,000 sustainability leaders from businesses, NGOs and their allies set to converge in New York next month for a leaders’ summit of the UN’s Global Compact – the world’s largest corporate sustainability membership organisation – it’s a good time to take stock.

It’s premature to come to any sweeping conclusions, to be sure. Too little time has passed to expect even the best executed of the blizzard of plans to have achieved major gains. Yet a recent special report, surveying the portfolio of UN bottom-up initiatives, makes for decidedly mixed reading.

Grand scale

On one hand, the tally of efforts – totalling 1,382 – offers impressive quantitative evidence of a wide and deep groundswell of initiatives across the globe. On the other hand, the survey shows a worrying fuzziness and lack of progress in key areas.

Though still faint, these signals raise critical questions. Is it possible to distribute a clear sense of direction across thousands of initiatives? Has progress been stymied by the lack of an over-arching development goal?

For a sense of the scope and scale of the proliferation of initiatives following Rio+20, flip through the July 2013 special report, Sustainable Development in Action, prepared by the United Nations’ sustainable development division. Synthesizing listings across a handful of databases, the report surveys the accretion of voluntary initiatives that support the goals of Rio+20.

The sheer scale of efforts is encouraging. Out of the 1,382 efforts, more than 700 commitments were announced as part of Rio+20. UN programmes have seeded hundreds more in recent years, including Sustainable Energy for All, United Nations Global Compact, Every Woman Every Child and others.

A closer look

Read a bit more closely, though, and fractures appear. Take, for example, one of the eight high-priority areas examined by the special report: the higher education sustainability initiative (HESI). While 272 organisations in 47 countries had made commitments related to higher-education sustainability as of June, a lack of centralised coordination is impeding progress (emphasis added):

Since HESI was established as an ad hoc initiative by several UN organizations and external stakeholders, this action network relies on a more informal organizational structure, typical of a network of stakeholders rather than a top-bottom organization. Therefore, although the organizational capacity to implement the network goals is rather limited and thus a bit challenging, it can also be seen as strength. (p. 20)

Improved energy efficiency and renewables are widely supported. Yet under the “Sustainable energy for all” category, the special report finds that scaling remains a barrier:

Ensuring adequate support to facilitate action across the many partner countries and thematically driven high impact opportunities, while at the same time collaborating with thousands of stakeholders over the course of the next 17 years, remain a key challenge for the initiative. Without proper follow-up and engagement, many commitment makers may not be fully engaged, which means that they would be left with little direction on how to contribute to the initiative in a tangible way. (p. 26)

To be fair, the problems highlighted in the special report extend far beyond the business-focused mandate of the Global Compact. However, the fuzzy goals and lack of guidance that UN programs face mirror some of the obstacles dogging private sustainability efforts as well.

The sheer number of different sustainability initiatives is a rising source of confusion for individuals, companies, consortia and even entire sectors. Even the most pro-sustainability business leaders can find themselves overwhelmed by well-intentioned, often overlapping agendas, collaborations and standards. The Global Initiative for Sustainability Ratings has uncovered more than 1,500 sustainability indicators spanning almost 600 issues.

And small, uncoordinated efforts come with added costs and missed opportunities. In a recent conversation with GSB’s Jo Confino, Mars Inc. CSO Barry Parkin estimated that 125 cocoa sustainability programs exist, each with high startup costs and affecting hundreds – at most, thousands – of farmers. “If we were to align behind programmes, it would be much more efficient,” he said, “and we could really scale up our impacts.”

Signs of success

That said, there are certainly some promising signs, too. Cities of varying sizes, with varying resources, are proving that a common set of interests, abilities and incentives can yield real gains.

Some 4,700 projects have been registered with C40 Cities, which links the mitigation efforts of scores of megacities together with many smaller innovative burgs. The group is on track to cut more than 1bn tonnes of emissions by 2030. In another city-focused program, the carbonn Cities Climate Registry, 302 cities from 42 countries, together responsible for some 1.5 gigatonnes of carbon-dioxide annually, have filed more than 3,600 commitments, inventories or mitigation plans.

What’s behind these metro success stories? Municipal climate efforts can yield self-sustaining benefits, such as boosting energy savings, health and the economy, according to a 2013 survey of 110 cities conducted by the Carbon Disclosure Project (CDP), C40 and the sustainability-engineering firm AECOM.

The gains are especially encouraging given that cities are the battleground for a growing share of the climate challenge. Urban centres account for a bit more than half of the world’s population today, but generate 75% of the globe’s greenhouse-gas emissions – and they’re expected to grow to contain 70% of the population by 2050.

What’s next?

Whether they take the form of rules requiring big buildings to track and publicize their resource use or water-savings standards on plumbing fixtures, successful city-scale efforts share some attributes: they benefit stakeholders, they’re clearly defined – with timelines – and are pushed from the top down.

When sustainability leaders meet in New York next month, they should realize they have a rare opportunity to give helpful structure and greater urgency to business’s role in the post-2015 world. Without clearer marching orders and deeper institutional support, though, the risk is that the post-Rio+20 bottom-up approach will simply bottom out.

~

Check out the original story here:
http://www.theguardian.com/sustainable-business/post-2015-bottom-up-approach

Axion International transforms plastic junk into rugged construction materials | Corporate Knights

Recycling mountains of plastic for smoother commutes, sturdier bridges and a cleaner environment.

These railroad ties near Miami are made from 100 per cent recycled plastic

Every day, thousands of commuters on Miami’s rapid transit system are whisked to work cushioned by a bed of empty milk jugs, discarded laundry detergent bottles and other household castoffs.

The plastic in question isn’t the familiar debris that accumulates in rail tracks, along roadways and on the sidewalk. Rather, the trains’ journeys are smoothed by super-rugged railroad ties made up of veritable mountains of plastic waste recycled from consumers’ trash.

At a quick glance, the dark plastic ties are tricky to distinguish from the heavy wooden beams they replace. Their performance is vastly different, however.

Where conventional wooden ties degrade, sometimes in just a few years, the recycled plastic composite ties do not.

In fact, the material “is basically impervious,” says Steve Silverman, president and chief executive of New Jersey-based Axion International Holdings, which supplied its Ecotrax composite ties to Miami-Dade Transit.

“It doesn’t rot. It doesn’t corrode. It doesn’t absorb water. Bugs don’t eat it,” he adds.

Silverman estimates the longevity of the plastic ties to be at least 40 years, compared with a few years for wooden ties in harsh environments. What’s more, they need no maintenance, such as painting or re-sealing.

On a run of heavily used Long Island Rail Road commuter rails, a batch of Axion ties in service over the past eight years showed no signs of material degradation, according to a recent independent lab study.

If anything, the ties’ performance had improved slightly. As the plastic weathered, it hardened slightly, tightening its grip on the spikes, screws and hardware that attach the rails to the ties.

More than 150,000 of Axion’s ties can be found in rail beds on six continents. The company is also pushing into the construction industry, where its composite I-beams, planks and structural members are being used to rebuild bridges formerly made of wood, concrete or steel.

If Axion’s approach takes off, the environment may prove to be the biggest winner.

Using recycled composites in these heavy-duty applications has the potential to usefully absorb enormous flows of plastic waste. Today, just 8 per cent of plastic is recaptured, according to the U.S. Environmental Protection Agency. More troublesome still, a significant share of waste plastic is lost to the environment.

Because plastic waste does not degrade, it does cumulative harm to the environment, whether in your backyard or floating far out in a Pacific Ocean gyre.

On land and on water, loose plastic waste often ensnares wildlife. It has also begun to penetrate natural food chains. As it breaks up into microscopic bits, plastic debris is consumed by tiny creatures, which are in turn eaten by bigger fish or birds. At each step in the food chain, traces of plastic and related additives accumulate, explains Susan Freinkel in her 2011 book, Plastic: A Toxic Love Story.

Humans are tainted by plastic, too. Blood tests reveal that synthetic chemicals used in plastics are circulating widely in our bodies.

Axion’s Miami project offers a glimpse at the impact that recycled composites could have in diverting the growing tide of plastic.

So far, Miami has purchased around 2,000 composite ties, made up of roughly one million pounds of recycled plastic. By comparison, every year, the U.S. rail system replaces some 20 million ties.

As a thought experiment, if all those replacement ties were made of recycled plastics, the effort could usefully sequester some 10 billion pounds of waste, more than twice the volume of all the plastic recycled in the U.S. in 2010. Besides being stable and enormously strong, the composite ties can also be recycled at the end of their life.
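The arithmetic behind this thought experiment can be checked with a quick sketch (figures from the article; the per-tie weight is inferred from Miami’s order of 2,000 ties containing roughly one million pounds of plastic):

```python
# Rough check of the article's thought experiment.
lbs_per_tie = 1_000_000 / 2_000          # ~500 lb of recycled plastic per tie
annual_replacements = 20_000_000         # ties replaced in the US each year
plastic_sequestered = annual_replacements * lbs_per_tie

print(f"{plastic_sequestered:,.0f} lb")  # → 10,000,000,000 lb
```

That 10 billion pounds squares with the article’s claim of roughly twice the volume of all US plastic recycled in 2010.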

For now, Axion is focusing on rail and construction markets where the relatively high upfront costs of its composites pencil out by avoiding more frequent future replacements. In the U.S. and overseas, this dictates a focus on regions similar to Florida, where it’s hot, wet and salty, and insects are rife – conditions where wood products are short-lived.

Longer term, Silverman sees bigger opportunities using plastic garbage to help remake America’s crumbling infrastructure. A 2011 Federal Highway Administration report estimates that more than 143,000 bridges are either structurally deficient or functionally obsolete, largely due to the sort of corrosion, wear and tear to which recycled structural composites are immune.

But first the company has to drive down the cost of its raw material. Though the world is awash in plastic, too little of it is recycled.

“If the U.S. recycled more, our prices would come down,” says Silverman.

That could be a triple win: for Axion, the country’s infrastructure and the environment.

~

Check out the original story at: http://www.corporateknights.com/article/tech-savvy-axion-international

Ford Motor Company looks high and low to save water | Corporate Knights

In the race to go green, it’s fair to say that Ford has looked high and low – literally – to help its automotive plants cut their impact on the environment.

One of Ford’s highest profile eco-efforts can best be seen by looking down on the roof of its River Rouge factory in Dearborn, Michigan. Constructed by Henry Ford starting in 1917, the complex debuted as an industrial pioneer, among the first fully integrated manufacturing sites, where steel mills, glass works and chemical plants were built side by side to speed the flow of raw materials into Ford’s burgeoning Model T plants.

It was there in 2000 that company chairman William Clay “Bill” Ford Jr., the founder’s great-grandson, unveiled another pioneering effort, of a greener hue. He announced plans to build the largest “living” roof ever installed on an industrial building, comprising 10 acres of hardy, green sedum plants.

Green roofs have since become a favourite of building designers. But at the time, Ford’s plan, part of a broader $2-billion site renovation, ran counter to energy-guzzling conventions in the auto biz. U.S. auto sales hit an all-time high that same year, buoyed by record sales of high-margin SUVs and sub-$2 per gallon gas.

Against this backdrop, and even though the roof was estimated to cost about the same as a conventional design, critics carped that Ford was risking money on greenwashing efforts. Yet when the roof was completed in 2002, Bill Ford stood firm. “This is not environmental philanthropy,” he said at the time. “It is sound business.”

Since then, much has changed. Gas prices have nearly doubled, endangering SUVs, and Ford’s green roof gamble continues to pay back by passively lowering the factory’s energy use for cooling, displacing electric illumination with skylights and reducing costs to filter stormwater runoff.

Elsewhere throughout Ford’s global operations, eco-roof features pioneered at River Rouge – such as day-lighting, rain water capture and cool-white materials that reflect sunlight – have become standard design features. Though the most visible, the River Rouge roof wasn’t the only water-focused effort Ford rolled out in 2002. That same year, the company began a long process to radically reduce the amount of water, energy and other resources used in its manufacturing operations.

From the start, metal-cutting machines were a top target. These computer-controlled devices shape hunks of steel and aluminum into precision auto parts, everything from big engine blocks to fine-toothed gears.

The problem? “It can be a messy process,” explains Sue Rokosz, principal environmental engineer at Ford.

Flood machining, as the conventional process is known, uses a steady stream of oil and water to cool cutting tools. This slurps up huge inflows of fresh water, requiring a lot of energy and plumbing infrastructure to keep flowing. And at the back end, it yields a slurry of oil, water and metal particles that are costly to dispose of and difficult to recycle.

As a fix, Ford turned to a process known as near-dry machining, or minimum quantity lubrication (MQL). The process replaces the stream of oily water with micro-spritzes of atomized oil delivered via articulated arms or hollow drill bits to precisely the point of contact where friction and heat build up.

It’s a small improvement that delivers outsized benefits. By making the switch, a typical manufacturing line – capable of machining roughly half a million parts every year – can lower annual water use by about 280,000 gallons and avoid the consumption of more than 28,000 gallons of lubricants.

What’s more, oily wastewater is all but eliminated and the metal shavings are relatively dry and clean, ensuring a higher share is recycled. Line workers benefit too, with drier, safer work areas, says Rokosz.

Though near-dry machining systems cost slightly more upfront, their overall lifetime costs pencil out at 17 per cent less than old-style wet machines, according to Ford data.

While the technology has become Ford’s de facto standard, it can be set up only as fast as new manufacturing lines are built or old ones are replaced. So far, it’s been installed in more than a third of Ford’s 28 powertrain plants, with more on deck to make the switch.

Drop by drop, Ford’s water-savings efforts are adding up. According to its sustainability report, Ford has cut water consumption, per vehicle produced, by about half in the past decade. It is on track to cut per-vehicle consumption to around 900 gallons by 2015, compared with over 2,500 gallons in 2000. That’s roughly equivalent to taking 100 fewer five-minute showers.
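The shower comparison checks out under a simple assumption (the per-shower figure is not in the article; a five-minute shower is assumed here to use about 16 gallons, i.e. a 3.2 gallon-per-minute showerhead):

```python
# Checking the article's shower equivalence, per vehicle produced.
saved_per_vehicle = 2_500 - 900   # gallons saved per vehicle vs. 2000 levels
gallons_per_shower = 16           # assumed five-minute shower

print(saved_per_vehicle / gallons_per_shower)  # → 100.0
```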

~

Check out the original story online, here:
http://www.corporateknights.com/article/tech-savvy-ford-motor-company


Could expensive oil rescue CCS? A talk with energy expert Michael Levi | Global CCS Institute

As oil prices continue to skirt all-time highs, there’s been a gusher of coverage about how oil producers are turning to ever more costly technologies—from ultra-deepwater drilling to mining tar sands—to eke more oil from the earth. Against this backdrop, I wondered if the case for using CO2 for enhanced oil recovery (EOR) is gaining mind share, or maybe even market share?

To get a better understanding of the impact of sustained high oil prices, I turned to Michael A. Levi, the David M. Rubenstein Senior Fellow for Energy and the Environment at the Council on Foreign Relations in New York City. A frequent author (his new book, The Power Surge, is due out this month) and a regular contributor to the CFR’s energy, security and climate blog, Levi is a prolific voice on energy issues, often quoted on the complex interplay between conventional energy, renewables, climate, and politics.

Levi first explored the linkages between high oil prices and EOR-CCS’ prospects last June, a time when oil prices were around 10 per cent lower than recent averages. In his post, Levi steps through a back-of-the-envelope assessment of the potential rewards of scaling up EOR-CCS.

In a 2010 study, Advanced Resources International estimated that a typical CO2-EOR project would require about one ton of CO2 for each 3.8 barrels of produced oil (assuming some recycling). Assuming CO2 available at $15/ton and an oil price of $112 they figured that a typical project could make a profit of about $30/bbl after returning 25% on capital.

Alas capturing and delivering CO2 from power plants costs a lot more than $15/ton. How much more? A lot depends on how much natural gas costs. A recent paper in Environmental Science & Technology uses a central estimate of $6.55/MMBtu and estimates that captured CO2 could be delivered at $73/ton. If prices are instead $5/MMBtu, which is a reasonable expectation in the United States, this would drop by about ten percent, to around $65/ton.

The authors also look at the question probabilistically. They find that there’s a 70 percent chance of being able to deliver CO2 for $100/ton or less. If you shift their natural gas price assumptions down a bit, it’s reasonable to drop this to about $90.

What would this mean for the economics of oil production? Estimated profits at $112/bbl oil would fall to about $18/bbl (part of the extra cost of CO2 would be offset by lower taxes). Once again, though, this is profit in excess of a 25 percent return on capital. Excess profits would be wiped out if oil prices fell to about $75.
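Levi’s back-of-envelope numbers above can be reproduced with a quick sketch (the $15/ton baseline, ~$90/ton capture cost, the 1 ton per 3.8 barrels ratio, and the $30-to-$18 profit drop all come from the quoted passage; the tax offset is simply inferred as the remainder):

```python
# Sketch of Levi's CO2-EOR arithmetic, with a simplified tax treatment.
CO2_PER_BBL = 1 / 3.8   # tons of CO2 injected per barrel of oil produced

def extra_co2_cost_per_bbl(co2_price, baseline=15.0):
    """Gross increase in CO2 cost per barrel vs. the $15/ton baseline."""
    return (co2_price - baseline) * CO2_PER_BBL

gross = extra_co2_cost_per_bbl(90)   # ~$19.7/bbl at ~$90/ton captured CO2
# Profit falls only from ~$30 to ~$18/bbl, so lower taxes absorb the rest.
tax_offset = gross - (30 - 18)       # ~$7.7/bbl

print(round(gross, 1), round(tax_offset, 1))
```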

Notably, the study to which Levi refers is focused on the cost of CO2 capture from natural gas processing plants—the largest industrial-scale sources of CO2 currently available. Levi’s calculation holds for proposed CCS-from-coal facilities, where planners are aiming at a similar target of delivering CO2 at less than US$100 per ton.

Back to oil prices, then. Given that crude has held steady at around US$100 per barrel in the past few years, Levi’s calculus makes CCS-EOR look like a pretty good proposition. Levi’s bottom line: “I wouldn’t count on high oil prices rescuing power plant CCS. But I wouldn’t write it off entirely either – and, even if there’s only limited deployment, the impact on technological progress could be large”.

In short, the longer the price of crude remains high, and the higher it goes, the stronger the case for EOR-CCS. While it may be perilous to speculate on oil prices, the balance of indicators points towards high prices over the long term. Energy-hungry emerging markets such as China and India increasingly drive long-term demand. A recent OECD report speculated that prices could rise as high as US$270 a barrel by 2020, due largely to demand growth in emerging markets.

To keep output rising, companies are already digging deeper—literally and financially—to lift each new barrel of oil. Exxon, for example, will spend a record US$41 billion in 2013 to buoy its long-term output of oil and gas, which it expects to fall by one per cent this year. As oil companies reach for more tools to eke out every last molecule of petrol, especially from wells they already control, it seems that the case for EOR-CCS is only improving.

I caught up with Levi in March to get his take. Here’s an edited version of our conversation.

In the years since the financial crisis hit, oil prices have remained stubbornly high, despite slow growth in much of the developed world. Do sustained high prices reinforce your take on the prospects for oil’s growing role in the future of CCS?

I’ll leave it to others to make predictions on future oil prices. But it is clear that high oil prices make it more attractive to use CCS in EOR. The higher oil prices go, and the longer they remain high, the more incentive there is to invest in CCS EOR.

Short-term variances in oil prices are fairly immaterial. What matters most is that prices have been sustained. This gives people more confidence that prices will remain high over a longer period of time.

No one invests for the long term based on today’s prices, especially not oil companies, which plan on multi-decade time scales. Power companies also think on very long time scales. Both are capital-heavy industries—familiar with assessing risk, pricing and financing big projects. The difference is that oil companies are more likely to be comfortable taking risks.

Given anemic US growth, why have oil prices remained near their all-time highs, when adjusted for inflation?

With the exception of the immediate aftermath of the financial crisis, when oil fell sharply, prices have been historically high. Prices also returned to high levels very soon after the global financial crisis.

When it comes to prices, the slow growth in the United States, following the recession, doesn’t matter insofar as we’re part of the world economy, taking a world price on oil. There’s a lot of growth in demand happening elsewhere, particularly in developing economies like China.

At the same time, even though there’s been a lot made of rising US oil output, in the global market the new sources add up to only modest supply growth. The net result is relatively high, sustained prices. Rising US oil output can help restrain prices at the margin, but it’s unlikely to crash prices on a sustained basis.

That’s partly because marginal North American oil production is fairly expensive. Whether it’s fracked oil in the Dakotas, or oil sands in Canada, these unconventional new sources are relatively costly to exploit, so require fairly high prices to be viable.

Some environmentalists have objected to the idea of CCS-EOR, maintaining that it’s perverse to pump anthropogenic CO2 into the ground to lift out fossil CO2 in the form of oil. For example Joe Romm—a former US DOE official and a leading voice on climate policy via Climate Progress—has argued that CCS-EOR will lead to more net CO2 emissions. Here he is, writing in 2007:

Capturing CO2 and injecting it into a well to squeeze more oil out of the ground is not real carbon sequestration. Why? When the recovered oil is burned, it releases at least as much CO2 as was stored (and possibly much more). Therefore, CO2 used for such enhanced oil recovery (EOR) does not reduce net carbon emissions and should not be sold to the public as a carbon offset… In short, the CO2 used to recover the oil is less than the CO2 released from that oil when you include the CO2 released from 1) burning all the refined products and 2) the refining process itself.

How do you see this issue on the net GHG impact of EOR-CCS?

Focusing only on each CO2 ton in the near term is short-sighted. There are two things worth keeping in mind. The first is that at the margin, US oil production tends to primarily displace other oil production rather than supplement it. So if lifting the US barrel, in that case, leaves even a little more CO2 in the ground than the alternative, then it’s a plus. That alone reduces the impact of this practice on net emissions of greenhouse gases.

The other perhaps more important aspect is, in the short run, what you should be focused on when it comes to CCS and EOR is the opportunity to develop the technology. The goal is to bring down its cost, which will let you apply it on a much larger scale to other industries. If you don’t start somewhere, it’s very hard to get to the point where this technology is cost-effective.

So even if applying CCS to boost EOR doesn’t create a big carbon benefit in the short run, it’s a good bet to deliver a big payoff in the longer run. It’s perhaps the most economically viable path to ready CCS for commercial use in the electric power sector around the world.

The point is that technologies need niches to scale up, and to bring down costs. If you only focus on technologies that can solve all our problems right now at low cost, it turns out that you don’t have any.

Don’t get me wrong. It’s obviously critical that we reduce net greenhouse gas emissions. But I’m more interested in being able to make huge reductions ten years from now than in the micro-level changes that might happen before this technology is scaled up.

We’ve touched on high priced oil already. The other bogeyman in global energy markets these days is cheap North American natural gas. In early March, a Canadian coal-to-syngas project that was slated to deploy advanced CCS was mothballed in part because of low natural gas prices. Does cheap natural gas alter the calculus on CCS-EOR in any way?

If we’re talking about CCS for synthetic liquid fuels, which you asked about, those require relatively high prices to be viable. Without massive over-investment in that space, I don’t see a stampede toward synfuels. So no, even if we see more synfuels, the shift will not crash the price of oil.

On that note, keep in mind that the one thing that might change the calculus of CCS-EOR is if oil prices crash. But it’s very difficult to crash the price of oil from the supply side, especially when it’s already this expensive, unless you massively overinvest in oil production—which is very capital intensive to do—or develop a very large-scale supply of alternatives.

As far as natural gas-fuelled cars and trucks go, my answer is the same. Yes, natural gas is being used to fuel a growing—but still small—share of fleet vehicles, and yes EVs will consume more natural gas indirectly, in the form of electricity. But will the penetration of natural gas into the US transport sector fundamentally change the economics of oil? I don’t see that in the next 10 years, at least. I think it’s hard to make predictions further beyond that.

With the failure of many publicly backed CCS projects around the globe, do you see EOR as a best bet to push CCS technology ahead?

I think that may well be right. With EOR-CCS, it may not be possible to make money at a large scale without new policy, but it may at least be possible to imagine that one can, and to come closer to covering the costs of scaling up the technology in the process. Conversely, it is impossible for anyone to imagine that they can make money taking the CO2 exhaust from a coal-fired power plant and burying it underground—unless there’s a policy incentive. It’s a pure additional cost.

Entrepreneurs who put money into EOR-CCS may be right, or they may be wrong, but at least a few may be willing to push ahead. In short, CCS-EOR provides short-term economic support for innovation. If you’re concerned about the long-term prospects of CCS, you should be thinking about EOR as the way to support innovation in the technology.

What are the key hurdles then to seeing EOR-CCS progress further, faster?

Will companies have the confidence to invest in this? Are there too many risks that are confusing? Is there too much uncertainty? Are there too many technological unknowns? I think those are bigger factors compared to whether oil prices might crash, or whether the global transport system might flip to natural gas.

There’s some evidence of progress out there. There’s interest in tweaking some tax credits that exist in order to support CCS-EOR. And in his State of the Union address, the president mentioned a US$25 million prize for the first combined cycle natural gas plant to implement CCS. It’ll be interesting to see whether that prize is defined to include projects that use the CO2 for EOR.

~

Check out the original post here: http://www.globalccsinstitute.com/insights/authors/adamaston/2013/03/26/could-expensive-oil-rescue-ccs-talk-energy-expert-michael-levi

Looking Ahead: CCS’ Prospects Under Ernest Moniz, Energy Secretary Nominee | Global CCS Institute

Ernest Moniz, President Obama’s newly nominated Energy Secretary, shares much with his predecessor, Steven Chu, the outgoing head of the Department of Energy (DOE), who is returning to an academic chair at Stanford University. Both men are prominent academic physicists, with long track records of advancing energy technology.

Chu proved to be a vocal advocate for clean energy technologies, especially in the realms of renewables and transportation, funneling billions in stimulus dollars into early stage R&D through ARPA-E and buoying mid-stage companies such as Tesla with federal loans. Under Chu’s watch, carbon capture and storage (CCS) remained a priority, with efforts to press ahead with FutureGen 2.0, but lacked the urgency that many stakeholders wanted to see.

Assuming a quick Senate approval—Moniz is widely expected to face a relatively easy confirmation—what’s in store for CCS under a Moniz-led DOE? On the downside, Moniz takes charge in a period of ever-tightening fiscal policy, so will all but certainly have less public money to deploy than did Chu.

If the conditions of the recent sequester hold, the budget for the DOE’s Fossil R&D program—under which FutureGen and other carbon capture programs are funded—will be cut by 5 per cent, or US$25 million.

On the upside, Moniz enters his new post with far more experience in rough-and-tumble Beltway tactics than did Chu. Moniz served in the second term of President Clinton’s cabinet, first as Under Secretary of Energy, and later as Associate Director for Science in the Office of Science and Technology Policy. Moniz has frequently testified before Congress, as well.

In terms of CCS, if past is precedent, there’s reason to be hopeful, maybe even a little optimistic.

I spent some time researching Moniz’ record and statements on CCS. Here’s what I found. If you have other examples, please add them in the comments.

As Tamar Hallerman notes at GHG Monitor, Moniz has co-authored several high-profile works on energy technology and policy in which carbon is a central issue. In The Future of Coal (MIT Energy Initiative, 2007), CCS is addressed front and center as vital to extending coal’s tenure in an environmentally tolerable way. The report formally recommends both a carbon price and that the Energy Dept. alter practices in its Fossil R&D regime to accelerate the development of CCS. Retrofitting of Coal-Fired Power Plants for CO2 Emissions Reductions (MIT Energy Initiative, 2009) offers far more detail on these issues. In the Summary for Policy Makers section, which Moniz co-authored, he makes a detailed case for increased federal emphasis on CCS. A few quotes (emphasis added):

“The US Government must move expeditiously to large-scale, properly instrumented, sustained demonstration of CO2 sequestration, with the goal of providing a stable regulatory framework for commercial operation.”

“Real world” retrofit decisions will be taken only after evaluation of numerous site-specific factors.

CO2 capture cost reduction is important.

A robust US post-combustion capture/oxy-combustion/ultra-supercritical plant R&D effort requires about US$1 [billion per] year for the next decade.

The Federal Government should dramatically expand the scale and scope for utility-scale commercial viability demonstration of advanced coal conversion plants with CO2 capture.

The program should specifically include demonstration of retrofit and rebuild options for existing coal power plants. New government management approaches with greater flexibility and new government funding approaches with greater certainty are a prerequisite for an effective program.

Time is of the essence.

About a year ago, Moniz sat down with The Energy Switch Project to document his views across the full range of conventional and renewable energy sources and related technologies. I’ve pasted below two outtakes, in which he comments on coal, CCS and carbon pricing.

In the video above, Moniz makes the following statements (abridged transcript): “Coal of course is a very widely used fuel, particularly for the power sector, with the US, China and India combined using about 60 per cent of the world’s coal. So if we’re going forward, particularly with carbon control in the future, we simply have to figure out a way to employ coal.

The answer has to be then for a serious solution: the ability to capture CO2 and sequester it underground. The problem right now is cost. Today we would probably be adding six, seven, eight cents per kilowatt-hour to electricity produced by coal… For a brand new coal plant, we’re probably talking that’s on top of six to seven cents. So let’s call it a doubling of the cost at the plant of the production of electricity.

We might or might not be willing to pay that in the United States, but it is very difficult to understand China and India being willing to pay this kind of a premium.

Do I believe today we can start safely injecting billions of tons into an appropriate reservoir? Absolutely. That’s a different statement, however, from doing it for 30, 40, or 50 years, and I think those things will work out as we do it.

The other near-term issue is that we really have very little idea as to how to regulate, how to assign liability [for CCS]. The EPA is in fact working on this, but certainly it cannot be based on the old types of regulatory structures put in place for water injection.”
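Moniz’s “doubling” shorthand checks out against the numbers he quotes; a quick sketch, using the midpoints of his ranges (the midpoint choice is mine):

```python
# Moniz's quoted ranges: capture adds ~6-8 cents/kWh on top of a
# ~6-7 cents/kWh generation cost at a brand new coal plant.
capture_adder = (6 + 8) / 2   # cents/kWh, midpoint of the quoted range
base_cost = (6 + 7) / 2       # cents/kWh, midpoint of the quoted range

total = base_cost + capture_adder
print(f"Cost with capture: {total:.1f} cents/kWh, "
      f"{total / base_cost:.1f}x the base cost")  # ~2.1x, i.e. roughly a doubling
```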

From the same interview, he also comments on carbon pricing, saying:

“Certainly it will never be cheaper to capture and store CO2 than it is to release it into the atmosphere so the reason we’re doing it in fact is because carbon will have a price and ultimately it has to be cheaper to capture and store it than to release it and pay a price.

If we start really squeezing down on carbon dioxide over the next two decades, that [price] could double, it could eventually triple.

I think inevitably if we squeeze down on carbon, we squeeze up on the cost, it brings along with it a push towards efficiency, it brings along with it a push towards clean technologies in a conventional pollution sense. It brings along with it a push towards security. After all, the security issues revolve around carbon-bearing fuels.

Now, I think it is very important that any funds associated with that be recycled efficiently to productive uses and to address distributional questions because some of the poor may be hit harder. There’s a lot of work to do, but in the end, if you take one simple thing, that’s the direction I think we need to go in.”

You can watch Moniz’ full 22-minute interview in a continuous stream on Vimeo, or pick and choose his comments on a single topic; the interview is conveniently split into short one- to two-minute segments by topic.

~

Check out the original post here:

http://www.globalccsinstitute.com/insights/authors/adamaston/2013/03/19/ccs%E2%80%99-prospects-under-energy-secretary-nominee-ernest-moniz

A Different Kind of Hybrid: UPS Bets on Hydraulics | Corporate Knights

Package delivery giant UPS gives its fleet the hybrid treatment, minus the expensive batteries

To help its iconic brown delivery vans go much further on a gallon of fuel, United Parcel Service is rolling out a new type of hybrid vehicle that’s propelled by hydraulic pressure instead of electric batteries.

The technology is a relative of the hybrid electric vehicle (HEV) pioneered by Toyota’s Prius, which achieves enviable mileage by recapturing much of the energy lost during braking. Instead of saving that braking energy in batteries, UPS’s new hydraulic hybrid vehicle (HHV) delivers a 35 per cent boost to mileage by storing it as pressurized hydraulic fluid in super-strong tanks.

“The hydraulics are the muscle, managed by very sophisticated electronics,” says Mike Britt, director of maintenance and engineering for the company’s international ground fleet.

Hydraulic systems may be new to delivery trucks, but they’re widely used elsewhere. The strength and durability of hydraulic systems have made them a mainstay in countless heavy-duty machines, from fighter planes and garbage trucks to bulldozers and car crushers. But until now, high costs have made it difficult to use hydraulic drives in everyday vehicles.

As part of a long-term government-backed program to study and scale up this technology, UPS began at the end of 2012 to introduce 40 of the advanced Daimler-built hybrids on delivery routes in Atlanta, Georgia, the shipping giant’s hometown, and Baltimore, Maryland.

From the outside, UPS’s hybrids are the same familiar brown boxes-on-wheels that have delivered catalogue orders and holiday gifts for generations. Pop open the hood, however, and you’ll begin to see differences. Inside is a powerful diesel engine, but instead of connecting to a drive axle and transmission, as in a regular truck, the engine drives an advanced pump that pressurizes a tank of hydraulic fluid.

Upon acceleration, digital controllers send bursts of highly pressurized fluid via narrow pipes to pump motors, which set the wheels spinning. The system works in reverse during braking. The pumps act as generators, recapturing more than 70 per cent of the vehicle’s kinetic energy. At idle, the engine doesn’t run. Rather, it switches on and off intermittently to top up hydraulic pressure.

The design’s main attraction is that it consumes less energy. Using the diesel engine to generate hydraulic pressure, rather than propel the van directly, allows the engine to run at a fixed, optimal speed.

What’s more, the regenerative braking process is about 50 per cent more effective at recapturing energy compared with a Prius-style hybrid electric vehicle, Britt adds.
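Taken together, Britt’s two figures imply a baseline for the Prius-style comparison; this is a back-of-envelope inference of mine, not a UPS claim:

```python
# The HHV recaptures more than 70% of braking energy and is said to be
# ~50% more effective than a Prius-style HEV at doing so. Together,
# those claims imply the HEV baseline recaptures roughly 70 / 1.5 percent.
hhv_recapture = 0.70
advantage = 1.50  # "50 per cent more effective"

implied_hev_recapture = hhv_recapture / advantage
print(f"Implied HEV braking recapture: {implied_hev_recapture:.0%}")  # ~47%
```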

There are also secondary savings in the form of less wear and tear. Compared with conventional designs, UPS anticipates the brakes will last four or five times longer.

Likewise, running the engine at its “sweet spot” should extend its lifespan two- or three-fold compared with a diesel engine used conventionally, Britt says.

Given that a typical UPS brown delivery van has an average lifespan of up to 25 years, less day-to-day downtime means many more deliveries and lower lifetime operating costs.

Performance improves, too. Drivers like that the system delivers enormous torque – or pushing power – immediately. That’s an advantage when moving a 27,000-pound van up to speed, then back to a stop, scores of times every day. “Hydraulic power is really well suited to stop-and-go delivery routes,” says Britt.

The design is the result of a project that started in 2006, backed by the U.S. Department of Energy’s Clean Cities program.

In the six years since, the department has orchestrated the development of a series of pilot vehicles in collaboration with three vehicle manufacturers (Eaton, Parker Hannifin and FCCC) and three major shippers (FedEx, Purolator and UPS).

With a fleet of 93,000 delivery vehicles – running the gamut from big rigs down to three-wheelers – UPS has proven itself an eager early adopter of green vehicle technologies. The hydraulic hybrids join a fleet of 2,500-plus “unconventional” vehicles, which includes HEVs, compressed natural gas (CNG), clean diesel and pure electric vehicles (EVs). Taken together, this fleet has motored more than 200 million miles since 2000.

The hydraulic hybrids are hitting U.S. roads at around $120,000 per vehicle, Britt estimates. That’s roughly twice as much as a standard diesel version. To help validate the long-term cost advantages, and to amass data on the real-world performance of the technology, the Environmental Protection Agency subsidized about a third of UPS’s total costs.

Britt believes there’s room for costs to fall and energy savings to rise. “If we do this right, we can set a standard for the whole industry.”

~

See the original story here: http://corporateknights.com/article/tech-savvy-united-postal-service

Can smarter storage solve our energy woes? | Ensia

Notrees Windpower Project battery storage unit

A new generation of technology focuses on supplying a midsize dollop of power exactly when and where it’s needed most.

Largely out of sight, tucked into building basements and stashed in garages, a new generation of energy storage technology is poised to help our aging grid not only avoid outages, but enable vast new flows of renewable power, all while saving some serious money. Call it the smart storage revolution.

California is ground zero for this trend. Across the Golden State, costs for electric power are high, renewables are multiplying, and key grid links are overloaded. But rather than rely on longstanding industry practice to fix grid problems by building more power plants or transmission lines, California regulators are encouraging customers and utilities to innovate.

At two InterContinental Hotels in the Bay Area, new storage technology is helping to reconcile these many challenges. The hotels’ secret weapon is a pair of fridge-sized boxes loaded with lithium-ion (Li-ion) batteries. Day to day, they’re able to cut the hotels’ electricity costs by up to 15 percent.

They do so by taking advantage of California’s complex power pricing regime. Smart software recharges the batteries when power is cheap, typically at night. When rates head up, the system seamlessly switches part or all of the hotels’ load to the batteries, thereby avoiding the need to purchase power at the costliest times, explains Salim Khan, CEO of Stem, the Millbrae, Calif.–based startup that built the systems.
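The arbitrage logic is simple enough to sketch in a few lines; every number below (rates, volume shifted, efficiency) is a hypothetical placeholder, not an actual Stem or hotel figure:

```python
# Toy model of time-of-use arbitrage: charge a battery at the off-peak
# rate, then discharge it in place of peak-rate purchases.
# All rates, volumes and the efficiency are illustrative assumptions.
off_peak_rate = 0.10   # $/kWh, hypothetical night rate
peak_rate = 0.35       # $/kWh, hypothetical afternoon rate
shifted_kwh = 500      # kWh/day served from the battery instead of the grid
round_trip_eff = 0.90  # assumed battery round-trip efficiency

cost_without = shifted_kwh * peak_rate
cost_with = (shifted_kwh / round_trip_eff) * off_peak_rate  # extra charge covers losses
daily_saving = cost_without - cost_with
print(f"Daily saving: ${daily_saving:.2f}")
```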

Stem energy storage units

New storage technology at two InterContinental Hotels in the Bay Area cuts the hotels’ electricity costs. Photo courtesy of Stem.

The technology helps the broader grid, too, by reducing the risk of outages. Grid gurus call it “peak shaving.” It’s a nifty trick in which stored energy displaces active generation during key moments. On the hottest days, even a tiny sliver of this kind of savings can make the difference between a blackout and business as usual.

For now, the InterContinental Hotels’ storage units are an exception, but they’re set to become a rule. In February, California became the first state to order investment in smart grid storage, initially calling for 50 megawatts (MW) in the Los Angeles basin area.

The rule doesn’t detail what kind of storage to deploy. Rather, it aims to spark more market innovations, like Stem’s, that improve grid performance while saving money. “This is a huge signal to the market that storage is ready to play” on par with conventional power plants, says Janice Lin, executive director of the California Energy Storage Alliance.

Goldilocks Storage

To be sure, storing electricity isn’t anything new. Cell phones, e-readers and laptops, all integral to daily life, let us use a little of the grid’s generation on the go.

Storage is well established at the macro scale, too. A little-known backbone of the U.S. grid is more than 20,000 MW — equal to the capacity of some 22 nuclear power plants — of “pumped hydro” storage. Scattered at scores of remote sites around the U.S., these systems comprise some 99 percent of today’s storage capacity.

At night, utilities use low-cost energy to pump water from a lower reservoir uphill to a higher basin. The next day, as demand peaks, the water is sent back downhill to generate power. Think of pumped hydro as the biggest battery we have. It’s capable of delivering city-sized volumes of power for hours in a row. That’s why grid operators are pushing to install thousands more megawatts of capacity.

But, as California is finding, on today’s grid, the sweet spot for smart storage is at scales somewhere between these two extremes. Much bigger than handheld phone batteries, but smaller than gargantuan lakes of hydropower, smarter storage solutions can be an ideal fit for critical niches where a midsize dollop of power, supplied for minutes or hours, is all that’s needed.

Sharing California’s Problems

California’s problems aren’t unique. Similar problems are surfacing across the U.S.

Transmission constraints. L.A.’s biggest problem isn’t inadequate supply of power — most of the time there’s enough juice available from regional generators. The real problem is funneling all that power through aged transmission lines that can’t handle the load.

In most markets installing new transmission cables, or even upgrading existing lines, is a no-go. As populations have grown, the cost of new grid links has skyrocketed; likewise, public patience with big construction projects is scant. New York City and Long Island face similar transmission constraints.

Storage offers a tidy solution. At night, utilities can use existing transmission lines to fill up batteries positioned near demand hot spots. Later, if demand peaks beyond power lines’ ability, batteries can fill in the necessary excess. “Energy storage helps you do more with less infrastructure,” says Bill Acker, executive director of NY-BEST, an energy storage technology consortium based in Albany, N.Y.

Waste and reliability. To meet peak levels of demand, the grid has been massively overbuilt. The Electric Power Research Institute estimates that one-quarter of all high-voltage distribution lines and about one-tenth of all power plants are used, on average, just 5 percent of the time, or roughly 36 hours per month.

That means hundreds of billions of dollars in assets go unused the vast majority of the time. Utilities are recognizing that carefully targeted storage can cost-effectively replace these least-used assets, says Haresh Kamath, program manager for energy storage at EPRI.

“We’re not using most of our assets most of the time,” says Johannes Rittershausen, managing director of Convergent Energy + Power, a developer of energy storage assets. “The challenge is to find ways to address infrastructure needs most efficiently and at the least cost to end users. A well-designed energy storage project can do just that by taking advantage of slack capacity in targeted locations.

“The U.S. grid will require massive investment over the coming decades as infrastructure ages and our society’s peak electricity demand continues to grow,” he adds.

Renewables’ risks. As a rule of thumb, grid experts believe that when intermittent power sources such as wind and solar surpass 20 percent, grid instability soars. And where that threshold once seemed remote, it is routinely being surpassed in many regions.

Storage boosts the value of renewables in two ways: by stepping in to provide power when renewable output drops off, and by mopping up excess output when solar or wind power exceed demand.

In February of this year, wind output set a record in Texas, briefly cranking out more than 28 percent of the power demands of the Electric Reliability Council of Texas, which manages the flow of about 85 percent of the state’s electric power. California’s goals for renewables are the nation’s highest, with a mandate to hit 33 percent by 2020. Roughly 20 more states, home to the majority of the U.S. population, have set goals of 20 percent or more.

“For the first time, the growth of renewables means we’re facing unpredictable supply,” says Lin. Demand will also grow less predictable as more electric vehicles come on line. “Plus it’s hard to find locations for new plants or transmission lines,” Lin adds. “Storage speaks to all these problems.”

Top Contenders

A menagerie of exotic new storage technologies — including thermal storage, flywheels and compressed air storage — is developing fast, but none has yet achieved commercial-scale viability. For now, advanced battery-based storage is the hottest of the grid’s newcomers, thanks to rapid declines in the price of Li-ion batteries.

Serendipitously, a key impetus for this trend started in the auto sector, where rising sales of battery-packed hybrids and electric vehicles are driving carmakers’ appetites for advanced Li-ion batteries. This is spurring new manufacturing capacity, driving prices down globally. Driven largely by rising demand from car companies, industry and utilities, the global Li-ion market is slated to double over the next four years, to around $24 billion, according to a recent Frost & Sullivan report.

As prices fall, Li-ion batteries are finding new niches. For now, at around $2,000 per kilowatt-hour, battery backup remains too costly for most applications. But batteries do pencil out for deep-pocketed utilities in key situations, where their ability to deliver large pulses of power is highly valued.

In January, for example, Duke Energy completed a 36 MW energy storage system at its Notrees Windpower Project in West Texas. Designed and installed by Austin-based Xtreme Power with funding from the U.S. Department of Energy, the $44-million system is the world’s largest wind-linked storage unit, made up of thousands of Li-ion battery cells.

According to financial services firm UBS, the cost of storage dropped by 40 percent over the past two years, and analysts expect the slide to continue, or even accelerate. At around $500 per kWh, EPRI estimates more than 40,000 MW of potential demand will enter the market. At that price point, residential-scale battery backup may become a reality. Pilot trials of such household-scale backup appliances are underway near Sacramento, Calif.
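At UBS’s observed pace of decline, those price thresholds aren’t far off. A rough extrapolation (the constant compound rate is my assumption; real cost curves rarely fall this smoothly):

```python
import math

# UBS: costs fell ~40% over two years. Assume, speculatively, the same
# compound rate continues from the ~$2,000/kWh level quoted above.
annual_factor = math.sqrt(1 - 0.40)  # ~0.775, i.e. ~22.5% decline per year
start = 2000.0                       # $/kWh today, per the article

def years_to(target):
    """Years until price falls from `start` to `target` at a constant rate."""
    return math.log(target / start) / math.log(annual_factor)

print(f"~{years_to(500):.0f} years to $500/kWh")   # EPRI's 40,000 MW threshold
print(f"~{years_to(250):.0f} years to $250/kWh")   # McKinsey's EV cost-parity point
```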

Installed in the garages of 15 solar-powered homes, Li-ion battery packs the size of small file cabinets hold enough juice to supply a few hours of power. The systems are designed to let the homes go off grid during periods of peak demand, saving homeowners money while reducing stress on the network.

And at $250 per kWh, consulting firm McKinsey & Co. predicts automakers will be able to build electric vehicles that are competitively priced in comparison to conventional cars, but with much lower fuel costs. With an eye on a future where there’s a Chevy Volt, Nissan Leaf or the like in every garage, utilities and carmakers are beginning to test vehicle-to-grid systems where EVs’ big battery packs are enlisted to back up the grid.

Powering Ahead

Innovative startups and companies from outside the utility sector are leading the shift towards smart storage. It’s not that utilities won’t play a big role here, but historically they tend to follow, says GreenTech Media smart grid analyst Zach Pollock. “Utilities’ adoption of nascent technologies is typically constrained by cautious regulators, conservative cultures and long budget cycles.”

And maybe that’s okay. Smart storage offers a rare opportunity, says Rittershausen. “We can do a project that makes a profit, saves the consumer money and reduces inefficiency,” he adds. “If this is done right it makes sense for investors, end users and utilities too.”

~

Check out the original story here: http://ensia.com/features/can-smarter-storage-solve-our-energy-woes/?viewAll=1

 

How recovering water from fresh potatoes helps a PepsiCo chips factory turn off the taps | Corporate Knights

For PepsiCo, one of the world’s biggest makers of potato chips, the key to producing the crispiest chips possible is driving moisture out of raw potatoes. Paradoxically, though, potatoes are made up mostly of water.

At a Walkers Crisps factory in Leicester, England, PepsiCo is turning this soggy challenge into a water-saving innovation. The goal: to extract so much water from inbound spuds that the factory can go “off grid,” drawing little or no water from public taps. Doing so, PepsiCo hopes, will help save the plant roughly $1 million a year in avoided water costs.

“The idea for taking factories off the water grid came from a simple observation by our front-line teams that potatoes are 80 per cent water,” says Martyn Seal, PepsiCo’s European director of sustainability.

PepsiCo’s effort to turn off the taps at its Walkers plant in the U.K. is one sliver of a bigger batch of initiatives to make its global operations run with less water. In 2010, the food-and-drink giant, which turned over $66.5 billion (U.S.) in sales last year, released its first comprehensive water report. The effort, similar to initiatives out of rival Coca-Cola, set out details of its water consumption alongside plans on how to use that water more productively.

One of its headline goals is to improve the efficiency of water use – measured by water consumed per unit of production – by 20 per cent by 2015, using 2006 as a baseline. PepsiCo hit this target last year, four years ahead of schedule. Another goal is to strive for “positive water balance” in water-distressed areas. This means for every unit of water PepsiCo uses, it strives to restore, replenish or prevent loss of the same amount or more in the same region. It also aims to provide access to safe water for three million people in developing countries before 2016.

Early on, potato-chip plants emerged as a juicy target for these goals. Making chips is surprisingly water intensive. In a normal year, some 350,000 metric tons of fresh tubers are shipped to the Leicester factory – the equivalent of some 13,000 tractor-trailer loads.

In the plant, potatoes are washed, peeled and sliced. A steady flow of H2O is used at each of these steps. In Leicester, this process demands roughly 700 million litres of water annually, the equivalent of roughly 280 Olympic-sized pools. Yet as crucial as water is while preparing the raw spuds, it’s an unwanted troublemaker thereafter. The thin slices are plunged for a few minutes into oversized fryers filled with oil boiling at 190°C (375°F). Water trapped in the potato slices vapourizes instantly, turning the otherwise inedible starch into an addictively crunchy treat.

In a conventional set-up, the cloud of steam that rises from these vats is vented out into the air. PepsiCo engineers recognized that the vapour represents a huge waste of both water and energy. To recover these wisps of moisture, PepsiCo fit a contraption onto the plant’s exhaust towers. Inside, the hot steam passes over a network of thin, cooled tubes. Moisture from the potato vapour condenses on the cooler tubes for easy collection. The process also recaptures traces of cooking oil from the exhaust. Both the oil and water can be reused. About four-fifths of the moisture that is normally lost is recovered.

Together with systems that recycle about two-thirds of the plant’s wastewater, the steam-recapture project is on track to supply enough water to hit PepsiCo’s goal of drawing zero freshwater in Leicester. The company is already testing the technology at similar sites in Holland and Belgium, part of a plan to extend these practices to other large European operations and, later, worldwide.
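For a sense of how the pieces could add up to “zero freshwater,” here is a rough annual balance using the article’s figures; the split between liquid wastewater and fryer steam is my guess, not PepsiCo’s:

```python
# Rough annual water balance for the Leicester plant, built from the
# article's figures plus one assumed split (marked ASSUMPTION below).
demand = 700.0                         # million litres/yr used for washing, peeling, slicing
potato_water = 350_000 * 0.80 / 1000   # 280 ML/yr of water arriving inside the potatoes

# ASSUMPTION: how the process water divides between recyclable liquid
# wastewater and moisture boiled out of the slices in the fryers.
wastewater = 600.0   # ML/yr, hypothetical
fryer_steam = 200.0  # ML/yr, hypothetical
assert fryer_steam <= potato_water  # can't boil off more water than the spuds contain

recycled = (2 / 3) * wastewater   # plant recycles about two-thirds of its wastewater
condensed = 0.80 * fryer_steam    # recapture unit recovers about four-fifths of the steam

net_freshwater = demand - recycled - condensed
print(f"Net freshwater draw: {net_freshwater:.0f} ML/yr "
      f"(vs {demand:.0f} ML without recovery)")
```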

A successful pilot in the Leicester plant “will provide us with a technology suite that we will be able to reapply at other PepsiCo plants, particularly in areas of severe water scarcity,” Seal says. “This is an opportunity to realize meaningful cost savings while reducing our impact on the environment.”

Combined with other projects across PepsiCo’s operations, the steam-recapture efforts contributed to savings of $45 million in water and related energy costs last year, compared with the 2006 base when the company began these efforts. By volume, in 2011 it used 16 billion fewer litres of water, compared with 2006.

As much as PepsiCo execs crow about the bottom-line impact of these efforts, they point to strategic benefits too: The company must plan for operating risks that droughts pose to future operations. By 2030, global demand for freshwater could exceed supplies by 40 per cent, explains Dan Bena, PepsiCo’s senior director of sustainable development.

“If this gap is not closed, there will be no business as we know it today,” he says.

~

See the original story here: http://www.corporateknights.com/article/tech-savvy-pepsico

How SAP and cities are boosting innovation through open data | GreenBiz

Listening to a couple of coders gush over the virtues of gamification, location-based mobile services and open data standards, I might have mistaken the techies for sneaker-wearing pitchmen at a Silicon Valley hackathon.

But this was midtown Manhattan. Instead of B-school dropouts, the geeks in question were actually silver-haired civil servants in charge of the IT operations for Boston and Edmonton. Though centuries old, each of the cities is racing towards decidedly cutting-edge goals of opening up access to municipal data for its residents and businesses to use and commercialize.

“Successful cities of the future aren’t necessarily the most efficient. It’s about engagement and citizen empowerment,” said Bill Oates, chief information officer for the City of Boston. “Our innovation is on people, to help constituents connect with the city.”

The CIOs were brought together by SAP to mark the U.S. launch of the software giant’s Urban Matters program, which aims to help municipal governments “deliver better-run cities” by opening up data streams for citizens and business to tap into.

As this software space matures, big companies also are exploring opportunities to integrate city data feeds into current and future services.

GM is grooming its OnStar unit to become the software hub for transportation services such as RelayRides’ peer-to-peer car sharing service. The automaker recently issued protocols that will let third-party developers integrate the data beginning to flow from cities — such as road construction information, or parking data — into future OnStar services.

In the world of smart buildings, Johnson Controls is likewise eyeing the opportunities emerging by tapping into huge, public pools of data on the performance of buildings in Philadelphia, New York, San Francisco, Washington D.C. and other cities.

Back in Boston, SAP technology is powering the city’s Boston About Results website and accompanying Citizen Insights iPhone app. Citizen Insights collects, analyzes and shares performance measures across scores of city departments, from tree planting requests to fire response times.

By digitizing and opening up their data flows, most cities are trying to evolve into “better versions of themselves” rather than presume to compete with Silicon Valley, said Bruce Katz, vice president and director of the Brookings Metropolitan Policy Program at the Brookings Institution.

Boston and Edmonton are pushing to lead a growing contest among cities aiming to boost their competitiveness by opening access to city data streams. Fighting to transform decades-old bureaucratic processes that tended to lock up key city information — such as property records or tax rolls — in hard-to-access formats, the goal is first to digitize as much information as possible.

As these programs grow more ambitious, cities face an outsized data challenge in scaling up these efforts. With centuries’ worth of property records or historical budget information, cities are typically sitting on mountains of data that are a challenge to digitize, standardize and make accessible.

Cities need “additional investment to deal with the analytics…of taking tens of thousands of data sets and looking inside them,” said Theresa Pardo of the Center for Technology in Government at the University at Albany, State University of New York (SUNY). “A lot of cities still have their records in paper form.”

Pardo’s center recently released a white paper, The Dynamics of Opening Government Data. The paper offers practical advice for government managers pursuing open data initiatives.

Edmonton started its open data efforts with a dozen public datasets in 2010. Today, “We have 257… San Francisco has 250,” said Chris Moore, the City of Edmonton’s CIO. Moore is pleased to be edging out one of the U.S.’s most wired metropolises.

Digitizing the information sequestered in city offices is just the beginning of the battle. Making those data streams publicly accessible and easily useable is essential for developers to build new services and businesses.

In Edmonton, for instance, the city transportation department recognized an appetite for access to information on road closures, resurfacings and the like. When the dataset went live earlier this year, it quickly became the city’s most popular feed. “After it was released, an Edmontonian created an app called YEG Constriction,” Moore told IT World Canada.

As Edmonton’s efforts unfold, Moore’s IT team hopes to stay at the front of city efforts by exploring gamification and the immersive 3D web as ways to boost public interaction with the data. For instance, the city is readying a Facebook game around traffic and safety, Moore said.

Behind all the enthusiasm for data transparency are bottom-line benefits that please city bean counters. The shift towards open data standards can deliver a big bang for the buck at a time when cities face rising demands for data services, yet have fewer resources with which to develop them.

In Edmonton, the city hosted Apps4Edmonton.ca, a contest to develop apps for city residents and businesses. “For around $50,000 we developed dozens of apps,” says Moore. Were the applications developed conventionally, he speculated, the cost of a single study of the business case would have exceeded that figure.

Code sharing between cities can further compound these savings. Boston’s New Urban Mechanics initiative encourages public collaboration to develop innovative civic services, explained Oates, the city’s CIO. Among the program’s most popular apps is Street Bump, an iPhone app that helps detect and report potholes. As a result of these efforts, nearly every pothole complaint in Boston is resolved in two days or less. A couple of years ago, less than half were completed that quickly.

Now, Boston is extending and sharing its New Urban Mechanics platform with dozens of other cities and towns, where apps can be adapted or further customized.

To be sure, cities aren’t going to threaten Silicon Valley’s software titans anytime soon. But a familiar, infectious air of competitive innovation is developing in municipal software circles. Earlier this month, Emily Badger at The Atlantic Cities rounded up the best open data releases of 2012. From listings of green roofs in Chicago to bikeshares in Boston, the apps are promising examples of how smart software can transform existing, static city data into dynamic, interactive tools that help make cities greener and more efficient.

While incremental, the boom in city data apps highlights how metropolises are best positioned to push ahead with effective innovation. “Cities are a lot more pragmatic than state or national governments,” said the Brookings Institution’s Bruce Katz.

Illustration of key opening file folder provided by Artgraphics via Shutterstock.

~

Check out the original story here: http://www.greenbiz.com/blog/2013/01/03/how-sap-cities-boosting-innovation-data

Meet the Change Makers: How UPS Delivers Big Energy Savings | OnEarth

For UPS, the world’s largest package delivery company, no time of year is more challenging than the holiday season. This year, the Atlanta-based company predicts the surge of packages it handles between Thanksgiving and Christmas will exceed half a billion. That tidal wave will peak on December 20 when, on a single day, some 28 million cardboard boxes will be loaded into UPS’s iconic big brown trucks to be delivered, at a rate of roughly 300 per second, to homes and businesses around the world.
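That per-second figure follows directly from the daily total; a quick sanity check on the arithmetic:

```python
# Sanity check on the article's figures: 28 million boxes spread
# over a single 24-hour peak day works out to roughly 300 per second.
boxes_per_day = 28_000_000
seconds_per_day = 24 * 60 * 60
print(round(boxes_per_day / seconds_per_day))  # -> 324
```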

The challenge of getting those packages where they need to be using the least amount of energy possible falls to Scott Wicker, who was named UPS’s first chief sustainability officer in 2011. Like many of UPS’s top execs, Wicker is a lifer. He got his start in 1977 unloading UPS trucks while studying to become an electrical engineer. Some three decades later, it’s fair to say Wicker is still working in trucks. Yet today, as CSO, his mandate is to improve the efficiency of UPS’s entire fleet of 93,000-plus vehicles – which includes those brown vans, long-haul trucks, and cargo planes as well as gondolas and tricycles — along with the company’s global portfolio of more than 1,800 facilities.

True to his engineering roots, Wicker approaches this challenge quantitatively. Given that fueling the UPS armada generates more than 90 percent of the company’s carbon emissions, much of UPS’s sustainability efforts focus on its fleet, such as streamlining delivery operations, developing fuel-efficient technologies, and exploring alternative fuels. In 2011, those efforts helped reduce company-wide greenhouse gas emissions by 3.5 percent, even though total package volume grew by 1.8 percent, according to a 2011 report.
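Taken together, those two numbers imply an even larger drop in emissions per package delivered. A quick back-of-the-envelope check (the 1.0 baselines are arbitrary normalizations, not UPS figures):

```python
# Combine the article's two 2011 figures to estimate the change in
# emissions *per package*. Baselines are indexed to 1.0 for illustration.
emissions = 1.0 * (1 - 0.035)   # total GHG emissions down 3.5%
packages  = 1.0 * (1 + 0.018)   # package volume up 1.8%
intensity_change = emissions / packages - 1
print(f"{intensity_change:.1%}")  # roughly -5.2% per package
```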

OnEarth contributor Adam Aston spoke with Wicker about how UPS has achieved these gains and become one of its industry’s top performers on sustainability.

If there’s a singular example of UPS’s focus on efficiency, it’s the left-hand turn rule in which delivery routes are designed for drivers to make as few lefts as possible. How did this come about?

It’s one of a long list of tweaks we’ve been making to drivers’ routes over the years. It goes back to the ‘70s. Back then, we saw that we were wasting a lot of time making left turns. The more time a van sits waiting to turn, the more fuel is burned idling.

Can you quantify the benefits of the rule?

Partly. It’s part of a broader set of efforts to eliminate idling. Last year we avoided 98 million minutes of idling. And less idling means less fuel burned. We estimate that this effort alone saved 653,000 gallons of fuel.
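Those two figures together imply an idle burn rate on the order of 0.4 gallons per hour, which is consistent with typical published figures for light-duty diesel idling. The implied rate:

```python
# Implied idle fuel burn rate from the article's figures.
minutes_avoided = 98_000_000
gallons_saved = 653_000
hours_avoided = minutes_avoided / 60
print(f"{gallons_saved / hours_avoided:.2f} gal/hour")  # -> 0.40
```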

So fuel efficiency is as much about how vehicles are driven, as what fuel they use or how the vehicle is designed?

Yes, some of the biggest changes to our fleet operations are the least visible. Last year, for example, we estimate we avoided driving nearly 90 million miles thanks to improvements in routing and package-flow technologies. That translates into more than 8 million gallons of fuel not burned. Our technologies determine how to load each package and where each one goes on a specific shelf in the truck.

We’re also developing the ability to adjust routing on the fly. If the driver has to veer off a route for any reason, the system can recalculate the optimal delivery sequence. Further, the system will help the driver to mix more urgent, early-morning deliveries in between less urgent deliveries with later time commitments. In the past, this wasn’t possible: all urgent packages were delivered first, regardless of lost opportunities to deliver another package nearby.
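UPS’s routing system is proprietary, so the details aren’t public. As a purely illustrative sketch, one naive way to resequence remaining stops after a detour is a greedy nearest-neighbor pass from the driver’s current position (stop names and coordinates below are hypothetical):

```python
# Toy sketch of on-the-fly resequencing: repeatedly visit the closest
# remaining stop. Real route optimizers solve a much harder problem,
# with time windows, traffic, and turn restrictions.
import math

def resequence(current, stops):
    """Greedily order `stops` (dicts with x, y coordinates) by
    repeatedly visiting the closest remaining stop from `current`."""
    remaining = list(stops)
    order = []
    pos = current
    while remaining:
        nxt = min(remaining,
                  key=lambda s: math.hypot(s["x"] - pos["x"],
                                           s["y"] - pos["y"]))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

stops = [{"name": "A", "x": 5, "y": 0},
         {"name": "B", "x": 1, "y": 1},
         {"name": "C", "x": 2, "y": 2}]
route = resequence({"x": 0, "y": 0}, stops)
print([s["name"] for s in route])  # -> ['B', 'C', 'A']
```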

It may sound minor, but these changes can help reduce the number of miles each driver travels each day. When you multiply a few miles saved per driver per day, the aggregated savings in time, fuel, and carbon are significant.

That said, is the push for a high-mileage truck still a top priority?

Yes. With more than 90,000 vehicles, it’s a constant concern. Our fleet of alternative-fueled vehicles is the largest in the industry, and one of the most diverse. Since 2000, some 2,500 unconventional UPS vehicles have racked up over 200 million miles in service.

Many are powered by natural gas, which we’re looking to as an alternative to diesel. For example, more than 900 local delivery vans are powered by compressed natural gas (CNG) in the U.S., and almost as many vehicles in Canada are powered by propane [a close relative of natural gas]. For long distances, we also have 59 big rigs — highway tractor-trailers — powered by liquefied natural gas (LNG).

Rounding out the alternative fleet are 381 hybrid electric models that, similar to Toyota’s Prius, use a combination of combustion, electric motors, and battery storage to boost mileage. Because they recapture so much of their energy through regenerative braking, these models are especially well-suited to urban routes, where total mileage is low, stops and starts are frequent, and pollution control is important. We’re also running a small number of ethanol-powered vehicles and pure electric vehicles, which run solely on power stored in their batteries.

We’re also excited to announce that starting this month, we’re rolling out 40 hydraulic hybrid delivery vehicles. This is a continuation of a program we piloted with the Department of Energy and other partners in 2006. Instead of storing energy in a conventional battery, these vehicles use hydraulic fluid as the storage medium. When the vehicle accelerates, some of this stored pressure helps it to start moving. During braking, the process works in reverse: the vehicle’s momentum is converted into pressure to recharge the hydraulic tanks. It’s a remarkably rugged system that can save up to 40 percent of fuel.
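As a rough, purely illustrative calculation (the vehicle mass, speed, and recapture efficiency are all assumed here, not figures from UPS), the energy available to any regenerative system at each stop can be estimated from first principles:

```python
# First-principles estimate of recoverable braking energy.
# All inputs are hypothetical round numbers for illustration.
mass_kg = 5000         # loaded delivery van (assumed)
speed_ms = 13.4        # roughly 30 mph
efficiency = 0.7       # assumed recapture fraction for a hydraulic system
kinetic_j = 0.5 * mass_kg * speed_ms**2
recaptured_j = kinetic_j * efficiency
print(f"{recaptured_j / 1000:.0f} kJ per stop")  # ~314 kJ
```

Multiplied across the dozens of stops on a dense urban route, that recaptured energy is why stop-and-go duty cycles suit hybrids so well.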

Why pursue so many kinds of technology?

We’d like to get off of fossil fuels. That’s our goal. Our approach is holistic because there is no silver bullet. It would be foolish to try to predict which fuel will emerge as the best or most durable.

Can you squeeze greater savings from your conventional diesel trucks?

Yes. One of the things we’re most excited about is “lightweighting.” Last year, we rolled out a test truck that looks similar to our regular delivery van, but that’s built with advanced materials that shave off 900 pounds. There are body panels made of lightweight plastic composites instead of metal sheets. Because the vehicle is so much lighter, we’re able to use a smaller engine, as well.

The trucks deliver approximately 40 percent gains in fuel efficiency, and the price is in line with the cost of a conventional vehicle. Based on that trial, we ordered 150 of these higher-mileage models. We’re also more comfortable with composite materials and will consider adding more composite components to larger vehicle types.

UPS operates a lot of vehicles consumers rarely see, from planes to long-haul trucks. What are you doing with these?

To put this in perspective, more than half of UPS’s carbon dioxide emissions come from jet fuel, and the rest of our mobile fleet makes up about a third of emissions.

For surface transportation, we shift as much as possible to rail, which is a far more efficient way to move goods than road. For rail and air, the efficiency options are fewer than on the road. With planes, we’re testing more efficient flight paths. Simplifying a jet’s landing pattern, by letting it glide down continuously rather than descending in a step pattern, delivers substantial savings. We’re also testing aviation biofuel. We know it works. The problem is making it at the right price.

Are your customers asking for data on the carbon impact of their shipping?

Customers began to push for this kind of data a few years ago. Big companies are facing more pressure from groups like the Carbon Disclosure Project, the federal government, and financial entities to report on their carbon footprints.

It’s been a challenge to build a system that collects all this data. But today, we’re one of the few logistics providers that calculate Scope 3 emissions, which often comprise a very large share of the total. These are the emissions produced indirectly to make goods or deliver services a company buys. [Ed. note: Scope 1 emissions come from direct actions, such as fueling a UPS truck. Scope 2 emissions are indirect, such as those associated with the electricity UPS buys from a utility. Find out more here.]
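The three-scope breakdown in the editor’s note can be summarized as a simple lookup. The activity labels below are illustrative, not UPS’s actual accounting categories:

```python
# Toy classifier for the GHG Protocol's three emission scopes,
# following the simplified definitions in the editor's note.
def classify_scope(activity):
    """Map an activity to its GHG Protocol scope (1, 2, or 3)."""
    direct = {"fleet fuel combustion", "facility boilers"}
    purchased_energy = {"purchased electricity", "purchased steam"}
    if activity in direct:
        return 1   # Scope 1: direct emissions from owned sources
    if activity in purchased_energy:
        return 2   # Scope 2: indirect, from purchased energy
    return 3       # Scope 3: everything else in the value chain

print(classify_scope("fleet fuel combustion"))  # -> 1
print(classify_scope("purchased electricity"))  # -> 2
print(classify_scope("outsourced shipping"))    # -> 3
```

The hard part, as Wicker notes, isn’t the classification itself but reliably collecting the underlying activity data across thousands of suppliers and routes.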

When we ship for a company, or handle its logistics, UPS becomes a major source of the company’s Scope 3 emissions. Delivering that data reliably is a very sophisticated process. Our experience developing these measures has helped us advise partners on their efforts to map out their own Scope 3 emissions, too.

Have UPS’s sustainability efforts helped attract customers?

Yes. UPS is the only U.S.-based company offering a carbon neutral shipping option across all product lines. Puma, for example, ships everything carbon neutral. Toto [a Japanese bathroom fixture maker] uses the service, too. Another example is LiveNation, which organizes touring bands. We ship all the bands’ gear in our trucks, and, in some cases, have begun to manage transport for those tours in a carbon neutral manner.

~

Originally published at http://www.onearth.org/article/meet-the-change-makers-how-ups-delivers-big-energy-savings

Writer, editor, content advisor, creative leader – energy, climate | Chief storyteller at RMI | Co-founder of T Brand at The New York Times