What lifestyle is ‘normal’?

While most of our posts are intended to be informative, this one will be short and involve humorous introspection. How is our lifestyle different from that of our grandparents? Their parents?  What do we take for granted?

This short video clip from a Conan O’Brien show highlights many of these points and gives us pause as we consider how our future choices will determine what the next ‘normal’ is.

Natural Gas – Short Term Solution for Problems We Cause?

(This is a guest article from Bill Sepmeier at www.mrsunny.org)

Data indicating that the end of oil’s reign as the world’s dominant energy source is at hand, long relegated to special-interest Peak Oil web sites, now hits the media each day.

The US Energy Information Administration (EIA) predicts that “global natural gas consumption will treble by 2030, when gas will become the primary energy source for industrial and public needs.” Given oil’s present rate of decline, it will be the only fossil energy source left for these applications. According to the Dow Jones Newswire, “Companies such as Exxon Mobil Corp., Royal Dutch Shell PLC and ConocoPhillips are making the transition from dealing mostly in oil, a commodity that’s increasingly scarce and difficult to produce, to natural gas, a fuel that’s suddenly become ubiquitous. Their profits, which can reach unfathomably high levels during good times, are likely to go down accordingly, executives and analysts say. So will the companies’ adventurous ventures in the deep waters and in dangerous regions of the globe. The industry now needs to learn the shale gas business, which has played a major role in spurring the transition and which can only be profitable if it runs like an assembly line. ‘It’s a manufacturing industry, and it’s not at all what we’re doing at Total,’ said Patrick Pouyanne, senior vice president for strategy and business development at Total S.A., which recently entered into a shale gas joint venture in the U.S. and is acquiring shale positions in France, Denmark, Argentina and North Africa.”

The observation by Total that natural gas “can only be profitable if it runs like an assembly line” points to the fact that new shale gas wells must be drilled constantly: while their production is prolific at first, it is brief (80% declines in the first year are common, with a further 30% decline in the second year from that lower level). Gushing industry statements about “100 to 300 year supplies” are not based on these observed declines, since the technology of horizontal drilling and hydraulic fracturing is simply too new to provide real knowledge of long-term average “frac” well production. There are also potentially serious issues with groundwater pollution that may be caused by these new hydraulic shale fracturing processes, which have yet to be investigated because rules passed during the Bush administration largely exempted shale gas drilling from EPA regulation. No matter – given declining supplies of oil and geographically limited coal reserves, it appears the decision has been made to make natural gas the replacement for oil and coal in ground transportation and baseload electric generating plants.

Because new gas wells and the fracturing processes involved are costly, and because drilling must remain ongoing (the “assembly line” technique) to keep freeing trapped shale gas, the wholesale price of natural gas must stay above roughly $6 per million BTU (MMBTU) for further exploitation to be profitable. Presently, the perceived abundance of shale gas and the recession have lowered prices to under $4 per MMBTU, and drilling has declined dramatically over the past year. Since the IEA announcement doubling its estimate of global gas reserves, gas price futures have continued to fall.

Natural gas profits are far lower than oil profits. The funds needed to convert the nation’s automotive and trucking fleet and its national refueling infrastructure from a well-established, room-temperature, atmospheric-pressure-stable liquid fuel to LNG (liquefied natural gas), which requires high-pressure (2,400 psi) transport and storage from source to end use, will have to be paid by someone. These multi-billion dollar infrastructure requirements will increase the cost of everything, from the vehicles themselves to their fuel. Transportation costs will increase significantly over the coming years regardless of the oil/LNG mix. Until a large demand for liquefied natural gas is established, the low wholesale price of gas will make tapping these estimated reserves possible only with large subsidies. Since the price of the world’s remaining oil will climb ever higher as its production declines, the transnational oil firms now becoming heavily involved with gas development should be able to afford this re-investment, though you can expect them to ask for and receive ever more government handouts and tax breaks to offset their costs of production.

Due to its lower energy density, natural gas has little present application in powering global ship transport, nor can it be used in the aviation industry. LNG, burned optimally at sea level, delivers about 75,000 BTU/gallon, while jet fuel and bunker oil offer 128,000–140,000 BTU/gallon; LNG delivers a bit more than half the energy per gallon of conventional fuel. On top of this, oil products are transported and stored without special processing; they’re liquid and stable at room temperature and pressure. Each gallon of LNG, by contrast, must be compressed to and maintained at 2,400 psi. The embedded energy required simply to compress natural gas at the scale needed to replace oil is not insignificant.

Here’s the deal: if natural gas is burned to make electricity at a baseload utility generation plant, and that electricity is used to compress natural gas into LNG, only about 30% of the energy of the gas burned at the power plant is converted into electricity (it’s that damned 2nd Law of Thermodynamics again). The rest of the energy is lost as waste heat, little of which is used in modern utility plants, since they’re located far from the central cities that could use the heat in a “steam district.” Of the electricity actually generated, about 18% is lost in transmission lines and transformers, again as heat, before it reaches the LNG compressor. The compressor motor loses another 20% of the energy it consumes, since electric motors are only about 80% efficient. The final LNG product, while delivering 75,000 BTU/gallon, is in reality a much less efficient energy delivery system than the oil-based system it has been deemed to replace, since the embedded energy losses involved in making it are huge – and we’ve not included the further losses in transporting and delivering LNG, nor the energy required to convert the world’s transportation system to the high-pressure reality LNG requires.
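
To see how these losses compound, here is a rough back-of-the-envelope calculation using only the percentages quoted above; the figures are the article’s own assumptions, and the gasoline heating value is an assumed typical number, not measured data.

```python
# Rough compounding of the losses described above, using the article's own
# assumed percentages -- not measured plant data.

plant_efficiency = 0.30      # fraction of the gas's heat converted to electricity
transmission_loss = 0.18     # fraction of that electricity lost in lines and transformers
motor_efficiency = 0.80      # fraction of delivered electricity turned into compressor work

to_compressor = plant_efficiency * (1 - transmission_loss) * motor_efficiency
print(f"Share of the original fuel energy reaching the compressor shaft: {to_compressor:.1%}")

# Volume penalty: BTU per gallon of LNG vs. an assumed ~118,500 BTU gallon of gasoline
lng_btu, gasoline_btu = 75_000, 118_500
print(f"Extra volume needed per gasoline-gallon equivalent: {gasoline_btu / lng_btu - 1:.0%}")
```

Under these assumptions, less than a fifth of the original fuel energy ever reaches the compressor shaft, which is the point of the paragraph above.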

The world’s largest reserves of natural gas are in … wait for it … Iran. According to Canada’s Global Research, “Within the Middle East, Iran is the undisputed top holder of gas reserves. Its South Pars gas field is the world’s largest. If converted to barrel-of-oil equivalents, Iran’s South Pars would dwarf the reserves of Saudi Arabia’s giant Ghawar oilfield. The latter is the world’s largest oilfield, and since it came into operation in 1948, Ghawar has effectively been the world’s beating heart for raw energy supply. In the soon-to-come era of natural gas dominance over oil, Iran will oust Saudi Arabia as the world’s beating heart for energy. The scheduled start of drilling this month by China National Petroleum Company (CNPC) in Iran’s South Pars gas field could be both a harbinger and explanation of much wider geopolitical developments. The $5 billion project – signed last year after years of foot-dragging by western energy giants Total and Shell under the shadow of US-led sanctions – reveals the main arterial system for future world energy supply and demand.”

“Critics have long suspected that the real reason for US and other western military involvement in Iraq and Afghanistan is to control the Central Asian energy corridor. So far, the focus seems to be mainly on oil. But what the CNPC-Iranian partnership shows is that natural gas is the bigger prize that will be pivotal to the world economy, and specifically the dual flow of this fuel westwards and eastwards from Central Asia to Europe and China.”

The only things really known for certain are these: gas, while “cleaner-burning” than oil (and coal) thanks to its higher ratio of hydrogen to carbon – which yields less carbon dioxide, though also less energy, per unit volume – still increases carbon levels in the atmosphere. To obtain the same amount of transportation energy, more gas or LNG must be burned by volume than oil (~58% more to equal a gallon of gasoline’s energy), and a lot of energy will be required to compress and process the gas into LNG, further lowering the overall efficiency of gas as an oil replacement. “Clean-burning gas” offers some environmental advantage over oil, but on a mass scale atmospheric carbon levels will continue to rise, since the world’s transportation system will continue mining and burning stuff that has been sequestered in the earth for a hundred million years, releasing its carbon content into the air.

Sadly, the massive push behind natural gas development worldwide will likely slow, once again, the further development and mass deployment of carbon-free renewable energy technology. Every delay pushes that deployment further into a more energy-starved future, lowering the probability that enough renewable energy technology can and will be deployed to sustain modern civilization on a mass scale. Renewable energy sources in the modern sense all require a lot of embedded fossil energy in their development, manufacture, transport to market and deployment – as these fossil sources grow more expensive and less reliably available, the window to deploy them in renewable energy manufacturing grows smaller, since competition for these ancient BTUs will only increase.

All things considered, the accepted paradigm of easy, fossil-energy-powered exponential economic growth, and the corresponding exponential population growth this energy made possible, still appears to be ending rapidly. On the scale needed to replace oil, natural gas and its processing and infrastructure requirements as a transport fuel won’t come cheaply enough, or be widespread enough, to offset the collapse of cheap, reliable oil production and of the mass civilization that has grown up around it.

(c) 2010 Bill Sepmeier

Pliocene

“The warmer climate facilitates hurricane activity. This amounts to a positive feedback, which can potentially lead to multiple climate states – one with permanent El Nino-like conditions and strong hurricane activity and the other corresponding to modern climate with a cold equatorial Pacific.” Fedorov, et al., 2010.
The Pliocene epoch spans 3 million years of Earth’s history, between 5.4 and 2.4 million years ago. In the early Pliocene our earliest pre-human ancestor discovered so far, Ardipithecus ramidus, appeared. The first humans, Homo habilis and possibly even Homo erectus, were around at the end of the Pliocene for the transition into the ice ages of the Pleistocene. While Homo habilis died out long before the appearance of Homo sapiens sapiens (us), Homo erectus was still around by the time we evolved.
At the beginning of this epoch, the Mediterranean Sea evaporated and remained dry for about 170,000 years. The African continental plate had been colliding with the European continental plate since about 85 million years ago, when the ancient Mediterranean was still the Tethys Ocean. Five million years ago the African plate slid under Spain, uplifting it and cutting the Mediterranean off from the Atlantic Ocean [Govers, 2009]. Since the Mediterranean Sea loses more water to evaporation than is supplied by all the rivers which feed into it, within a few tens of years it had virtually dried up, forming a deep basin some 3 miles below sea level at its deepest point. The average depth of the Mediterranean is about 1 mile. Imagine some pre-human following the edge of the Mediterranean Sea and all of its bounty a mile or so below sea level, perhaps a successful strategy for tens of thousands of years and thousands of generations. When the Atlantic finally breached the Gibraltar dam, the flooding must have been dramatic, catching millions of animals unaware.
But the most interesting thing about the Pliocene is its climate. It turns out that the Pliocene epoch is the best analog for the current Earth climate in all of the 4.55-billion-year history of our planet. The sun’s luminosity was nearly the same as it is today, the atmospheric carbon dioxide level was between 300 and 400 parts per million by volume (ppmV), and the continents were in approximately the same locations. We have discussed the faint young sun in a previous article [climate factors]. If you recall, our sun has been steadily increasing in luminosity as the original hydrogen in its core fuses into helium. Because of this, during the Pliocene the solar forcing may have been about 0.2 W/m2 less than today, or about the same as during the Little Ice Age [Wang, 2005; Krivova, 2007]. Despite the slightly cooler sun, the early Pliocene was 4°C warmer than today and the mid Pliocene was about 2°C warmer. Carbon dioxide forcing has to account for the warm climate despite the slightly cooler sun. This is an enigma, since the current estimate for the equilibrium climate sensitivity – the amount the temperature would increase with a doubling of atmospheric carbon dioxide, i.e., to about 560 ppmV – is about 3°C. Atmospheric carbon dioxide levels today are about 390 ppmV, at the upper end of the Pliocene values, yet the Pliocene climate was hotter than the accepted equilibrium climate sensitivity would predict.
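
One way to see the enigma in numbers is to apply the standard logarithmic rule of thumb for CO2 forcing, scaled by the 3°C-per-doubling sensitivity quoted above. This is only a rough sketch: the 280 ppmV pre-industrial baseline is an assumption, and the Pliocene warmth quoted above is measured relative to today rather than to pre-industrial times.

```python
import math

def equilibrium_warming(co2_ppm, sensitivity_per_doubling=3.0, baseline_ppm=280.0):
    """Warming implied by the logarithmic CO2 forcing rule of thumb."""
    return sensitivity_per_doubling * math.log(co2_ppm / baseline_ppm) / math.log(2.0)

# Pliocene-like CO2 levels (300-400 ppmV) with a 3 C per doubling sensitivity
for ppm in (300, 350, 400):
    print(f"{ppm} ppmV -> ~{equilibrium_warming(ppm):.1f} C of equilibrium warming")
# Roughly 0.3-1.5 C -- well short of the 2-4 C Pliocene warmth described above.
```
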
While the continents were nearly in their present positions, there were some differences. The difference which probably affected the climate the most is that the Isthmus of Panama between North and South America did not close the connection between the Pacific and the Atlantic Oceans, the Central American Seaway, until about 3 million years ago [Murdock, 1997]. This closure altered the ocean circulation that carries heat around the globe.
A recent paper suggests another feedback mechanism associated with the increased warmth of the early Pliocene. Fedorov et al. propose that increased hurricane activity contributed to the warm climate as part of a positive feedback mechanism that maintained the warmth with permanent El Nino-like conditions [Fedorov, 2010]. In a review article, Ryan Sriver writes, “These results may provide clues to understanding not only the climate of the early Pliocene, but also the nature of future climate change in a greenhouse world.” [Sriver, 2010]
Perhaps the most important difference, not discussed in the Fedorov paper, is that the Earth’s climate had been gradually cooling since the hothouse Eocene 50 million years ago, whereas our climate is recovering from an ice age; the last glacial maximum was only 20,000 years ago. I asked Kerry Emanuel of MIT, a co-author of the Fedorov paper, about this and he replied: “Although CO2 levels were similar then to today’s, that climate had plenty of time to equilibrate to the forcing whereas ours clearly has not. It is also plausible that the climate exhibits hysteresis and multiple equilibria, so that approaching 370 ppm of CO2 from a warmer state may yield a different climate than approaching it from a colder state.”
James Hansen of NASA GISS has suggested that the accepted value for the sensitivity of the Earth’s climate only accounts for fast feedbacks, such as increasing water vapor, and not very slow feedbacks. Hansen suggests that equilibrium climate sensitivity might be closer to 6°C when slow feedbacks are accounted for [Hansen, 2008]. A related point is that most of the trapped energy is currently warming the oceans, as shown in Figure 1, and it takes a very long time for these bodies of water to warm up or cool down [Murphy, 2009].

"ocean heat content"

Figure 1: Total Earth Heat Content anomaly from 1950 (Murphy 2009). Ocean data taken from Domingues et al 2008. Land + Atmosphere includes the heat absorbed to melt ice.

What we can appreciate from a study of the Pliocene climate is, first, that equilibrium climate sensitivity may be higher than the consensus view, so we may see an unexpected increase in warming once the oceans equilibrate to the new, higher level of carbon dioxide; and second, that the climate may change states, from the current one in which we experience an El Nino event every 3-8 years to a permanent El Nino state which may be self-sustaining.
To view maps of the locations of the continents in the Earth’s past, see Christopher R. Scotese’s fascinating web site.

Tony Noerpel

References
Fedorov, A. V., Brierley, C. M., and Emanuel, K., Tropical cyclones and permanent El Nino in the early Pliocene epoch, Nature, Vol. 463, February 25, 2010, 1066-1070.
Govers et al. Choking the Mediterranean to dehydration: The Messinian salinity crisis. Geology, 2009; 37 (2): 167 DOI: 10.1130/G25141A.1
Murdock, T. Q., A. J. Weaver, and A. F. Fanning (1997), Paleoclimatic response of the closing of the Isthmus of Panama in a coupled ocean-atmosphere model, Geophys. Res. Lett., 24(3), 253–256.
Wang, Y.-M., Lean, J. L., and Sheeley, N. R. Jr., Modeling the sun’s magnetic field and irradiance since 1713, The Astrophysical Journal, 625:522–538, May 20, 2005.
Krivova, N. A., Balmaceda, L., and Solanki, S. K., Reconstruction of solar total irradiance since 1700 from the surface magnetic flux, Astronomy and Astrophysics, Volume 467, Number 1, May III 2007, 335 – 346.
Hansen, J., Sato, M., Kharecha, P., Beerling, D., Berner, R., Masson-Delmotte, V., Pagani, M., Raymo, M., Royer, D. L., and Zachos, J. C., Target Atmospheric CO2: Where Should Humanity Aim?, The Open Atmospheric Science Journal, 2008, 2, 217-231.
Murphy, D. M., S. Solomon, R. W. Portmann, K. H. Rosenlof, P. M. Forster, and T. Wong (2009), An observationally based energy balance for the Earth since 1950, J. Geophys. Res., 114, D17107, doi:10.1029/2009JD012105.
Sriver, R., Tropical cyclones in the mix, Nature, Vol. 463, 25 February 2010, 1032-1033.
http://www.scotese.com/
http://www.archaeologyinfo.com/ardipithecusramidus.htm
http://www.archaeologyinfo.com/homohabilis.htm
http://www.esd.ornl.gov/projects/qen/pliocene.html
When the Mediterranean Sea dried up (late Miocene)
http://www.sciencedaily.com/releases/2009/02/090211122529.htm
early hominids
http://anthro.palomar.edu/hominid/australo_1.htm

CHERNOBYL: CONSEQUENCES OF THE CATASTROPHE FOR PEOPLE AND THE ENVIRONMENT

Book Review

Authors

Alexey V. YABLOKOV
Vassily B. NESTERENKO
Alexey V. NESTERENKO

Consulting Editor Janette D. Sherman-Nevinger

Chernobyl: Consequences of the Catastrophe for People and the Environment is written by Alexey Yablokov, Vassily Nesterenko and Alexey Nesterenko. The senior author, Dr. Alexey Yablokov, was State Councilor for Environment and Health under Yeltsin and a member of the Russian Academy of Sciences – since then he has received no support. Yablokov is an Honorary Foreign Member of the American Academy of Arts and Sciences (Boston). Dr. Vassily Nesterenko, head of the Ukrainian nuclear establishment at the time of the accident, flew over the burning reactor and took the only measurements. In August 2009 he died as a result of radiation damage, but earlier, with help from Andrei Sakharov, he was able to establish BELRAD to help the children of the area. Dr. Alexey Nesterenko is a biologist/ecologist based in Minsk, Belarus. Dr. Sherman-Nevinger is a physician and toxicologist and adjunct professor in the Environmental Research Center at Western Michigan University.

The authors abstracted data from more than 5,000 published articles and studies, mostly available only in Slavic languages and not accessible to those outside the former Soviet Union or Eastern bloc countries. The findings are by those who witnessed first-hand the effects of Chernobyl. This book stands in contrast to findings by the World Health Organization (WHO), the International Atomic Energy Agency (IAEA) and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), which based their findings on some 300 western research papers and found little of concern about the fallout from Chernobyl.

The explosion of the fourth reactor of the Chernobyl nuclear power plant on April 26, 1986 released hundreds of times more radioactive contamination than the bombs dropped on Hiroshima and Nagasaki.

While the most apparent human and environmental damage occurred, and continues to occur, in the Ukraine, Belarus and European Russia, more than 50 percent of the total radioactivity spread across the entire northern hemisphere, contaminating some 400 million people.

Based on those 5,000 articles by multiple researchers and observers, the authors estimate that by 2004 some 985,000 deaths worldwide had been caused by the disaster, giving the lie to the far lower estimates of the IAEA and World Health Organization.

The life systems that were studied – humans, voles, livestock, birds, fish, plants, mushrooms, bacteria, viruses, etc. – were, with few exceptions, changed by radioactive fallout, many irreversibly.

Increased cancer incidence is not the only observed adverse effect from the Chernobyl fallout – noted also are birth defects, pregnancy losses, accelerated aging, brain damage, heart, endocrine, kidney, gastrointestinal and lung diseases, and cataracts among the young.

Children have been most seriously affected – before the radioactive Chernobyl releases, 80% of children were deemed healthy; now, in some areas, only 20% of children are considered healthy. Many have poor development, learning disabilities, and endocrine abnormalities.

Why should we read this book? Because there are plans to build new reactors at Lake Anna in VA and Calvert Cliffs in MD, and the 104 existing US power reactors are releasing isotopes 24/7 while their infrastructure deteriorates. It is only a matter of time before we have a new catastrophe.

Below is the New York Academy of Sciences site for the book:

http://www.nyas.org/Publications/Annals/Detail.aspx?cid=f3f3bd16-51ba-4d7b-a086-753f44b3bfc1

The listed price is $150, but my colleagues have negotiated a price of $40 per copy. We can supply this book to you. Please see the contact information below.

Janette D. Sherman, MD is a physician and toxicologist in Alexandria, VA. She has done research and published scientific data about chemicals and nuclear radiation that cause cancer and birth defects. Her web site is:
http://www.janettesherman.com and e-mail is: toxdoc.js@verizon.net

Loudoun County Regional Science and Engineering Fair

Energy and Environmental Sustainability Awards

“The goal of science is to make sense of the diversity of nature.” John Barrow, New Theories of Everything, 2007.

“The basis for the definition of taxa has progressively shifted from the organismal to the cellular to the molecular level. Molecular comparisons show that life on this planet divides into three primary groupings, commonly known as the eubacteria, the archaebacteria, and the eukaryotes.” Carl Woese, et al., Towards a natural system of organisms: Proposal for the domains Archaea, Bacteria, and Eucarya, Proc. Natl. Acad. Sci. USA, Vol. 87, pp. 4576-4579, June 1990.

“Exploration of [Titan] is of high interest because much of the chemistry going on in the atmosphere and on the surface may give us insight into organic chemistry on the earliest Earth.” Jonathan Lunine, Earth, Evolution of a Habitable World, 2000.

Sustainable Loudoun launched the Energy and Environmental Sustainability Awards in 2007 using a donation from a local group. Since 2008, REHAU Corporation has financed the award, and Mike Maher of REHAU and I have judged the student projects. In the last couple of years John Hunter of Lovettsville has been a third judge. Every year we have been impressed with the competence and creativity of the students. It is always difficult to select winners from among so many deserving entries, but it is a complete joy to review these projects with such promising and remarkable students. Last year we introduced an honorable mention category to acknowledge what we considered the best freshman entry.

The awards ceremony will be on Earth Day, April 22, at 7 PM at REHAU in Leesburg. Featured speakers are Edgar Hatrick, Superintendent of Loudoun County Public Schools; Martin Ogle, Chief Naturalist for the Northern Virginia Regional Park Authority; and Meghan Chapple-Brown, Director of the Office of Sustainability at The George Washington University. Admission is free and open to the public.

There is an accidental, or unintended, theme to this year’s winning projects. Within the sustainability literature, descriptions of opportunities for cooperation with nature, as opposed to competition with nature, are plentiful. Life forms have been suggested for remediating ocean dead zones, rebuilding damaged soils, processing sewage and generating biofuels, and of course many more applications have been proposed. Many farms in Loudoun County use organic principles, taking advantage of natural nitrogen fixers, dung beetles, worms and other natural composters. In order to take advantage of this technology it is necessary to understand the metabolism and evolution of critters. Our first place winner, Danyas Sarathy, a Freedom High School freshman, analyzed databases of cellular metabolism to construct a tree of life identical to the evolutionary tree constructed by Carl Woese, the discoverer of the archaebacteria. Archaebacteria are extremophiles. If life exists anywhere else in the solar system – Mars, Titan or Europa – it will need to be similar to these extremophiles. Our third place winner, Heather Quante, a Loudoun Valley High School senior, analyzed the possibility of such organisms surviving on Saturn’s moon Titan. Our second place project is a team effort by Anita Alexander, a Broad Run High School senior, and Hannah Arnold, a Loudoun County High School senior. They conducted experiments on a possible practical application of our knowledge of organism metabolism, using the bacterium Ralstonia eutropha to produce plastic precursors.

Our honorable mention winner goes to Adithya Saikumar, a Briar Woods High School freshman.  Adithya’s project demonstrates good research technique and we look forward to seeing what Adithya accomplishes next year.

Project abstracts:

First Place:

Comparative Metabolomics: Construction and Analysis of Eukaryotic and Prokaryotic Metabolomes – Danyas Sarathy (1305F09), Freedom High School Freshman

Metabolomes are composed of cellular metabolites, which are small molecules of intermediary metabolism. Biological databases like KEGG, PubChem and MetaCyc contain information on human, animal, plant and microbial genomes that have been sequenced and annotated. Information on the functions of gene products, particularly the enzymes of all pathways of intermediary metabolism, is also provided in these public domain databases. In this study, the reactions catalyzed by the pathway enzymes, in terms of the substrate and product molecules involved, were data mined. Extracting these metabolites and compiling them for representative organisms allowed comparative analysis of metabolomes within and among the different groups. A defined set of metabolites was found to be present in all metabolomes, which could be called the core metabolome, containing mostly the basic building blocks of amino acids and nucleotides for protein and DNA synthesis. A number of correlations indicated that heterotrophs in general possess wider metabolic capabilities than autotrophs, which is reflected in the size of their metabolomes. Furthermore, clustering analysis of these metabolomes using the multi-variate statistical analysis package (MVSP) enabled the construction of a tree of life that displayed the discrete segregation of diverse organisms into groups of animals, plants, fungi, protists, bacteria and archaea. This metabolome-based tree of life is a novel and alternative approach to the classical phylogenetic construction of the tree based on small subunit ribosomal RNAs of diverse organisms.

Second Place:

Waste Products as Growth Media for the Accumulation of PHAs – Anita Alexander, Broad Run High School and Hannah Arnold, Loudoun County High School.

Polyhydroxyalkanoates, or PHAs, are biodegradable thermoplastics produced by certain bacteria under stressed conditions. However, the production and extraction of this polymer is currently an expensive process. The goal of the research is to lower the cost and environmental impact of the process by using waste products as the main carbon source. The specific objective of the research was to determine which of several waste products is most effective as the carbon source in the fermentation medium. Ralstonia eutropha was grown in a nitrogen-limited fermentation medium containing an excess of carbon in the form of waste products, such as a dead-leaf slurry and seeds. The growth of the cells was measured, and the polymer was subsequently extracted by first treating the bacteria with methanol and then adding 30 mL of acetone for 24 hours. Comparisons were made between the growth rates of bacteria in the different media, as well as between the dry weights of the extracted polymer. Slightly lower growth rates were observed in the trials utilizing a waste product as a carbon source compared to the control group; however, the final product is comparable and makes use of materials that would otherwise be thrown away. By reducing the overall cost of this process, the economic viability is increased, giving the product potential as an environmentally beneficial replacement for petroleum-based plastics.

Third Place:

Modeling Populations of Organisms on Titan – Heather Quante, Loudoun Valley High School.

Exploration of the solar system has yielded data on many different environments that may be hospitable to life, including Titan, one of Saturn’s moons. Some organisms that live in extreme environments on Earth, called extremophiles, live in conditions, such as extreme cold, similar to those on Titan. The purpose of this project is to use mathematical modeling to ascertain whether organisms that had evolved characteristics similar to extremophiles would be able to survive in Titan’s environments. Last year, data from previous studies of extremophiles were collected to quantify growth rate under different environmental conditions, and specific data on Titan’s environments were gathered using Cassini-Huygens mission data. These two data collections were then used in Excel to create a logistic population model whose equation describes how extremophiles would be affected when subjected to an environmental condition, such as temperature. This year, the project was moved into the Mathematica program to create a logistic model that would simultaneously calculate the effects of changing temperature, pressure, and pH on the growth rate of the population. Additionally, the model was made more accurate by taking into account the population’s constantly changing effects on its environment. In the end, when Titan-like environmental conditions were chosen for the model, it could determine whether an extremophile population would grow and sustain itself or die out over time. The model’s final results predicted that the possibilities for life on Titan’s surface are slim, but a small, slow-growing, stable population might survive in Titan’s underground ocean environment.
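
For readers unfamiliar with the technique, here is a minimal sketch of a logistic population model with an environment-dependent growth rate. The parameters and the temperature response below are hypothetical illustrations of the general approach, not values taken from Heather’s project.

```python
# Minimal logistic population model, dN/dt = r(T) * N * (1 - N/K), integrated
# with Euler steps.  All parameters are hypothetical, for illustration only.

def growth_rate(temp_k, r_max=0.05, t_opt=270.0, width=100.0):
    """Hypothetical per-day growth rate that falls to zero away from an optimum temperature."""
    return max(0.0, r_max * (1.0 - abs(temp_k - t_opt) / width))

def simulate(temp_k, days=2000, n0=10.0, carrying_capacity=1000.0, dt=1.0):
    n = n0
    for _ in range(days):
        n += growth_rate(temp_k) * n * (1.0 - n / carrying_capacity) * dt
    return n

for temp in (270.0, 200.0, 94.0):   # 94 K is roughly Titan's surface temperature
    print(f"T = {temp:5.1f} K -> population after 2000 days: {simulate(temp):7.1f}")
```
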

Honorable Mention:

Nano-Tech Powering Green Movement – Adithya Saikumar, Briar Woods High School.

Ever think of driving a car without stopping for gas? That is possible with hydrogen fuel cell powered automobiles. They are a future source of renewable energy, but the problem is efficiency. This experiment focuses on trying to increase the efficiency of such a car. Using silver nanoparticles as the independent variable (IV), the experiment tests whether the time a model car runs on a single charge (the dependent variable, DV) can be increased. The time the car runs without the nanoparticles serves as the control. Fifteen trials were run for both the IV and the control groups. The time the car ran and the hydrogen/oxygen produced were recorded.

The control group timing averaged 326.33 seconds. The IV group averaged 343.33 seconds. The amounts of hydrogen/oxygen produced for the control and IV groups were 15.23/11.06 and 19.06/9.9 mL respectively. A t-test indicated that the results were not statistically significant.

The alternative hypothesis stated, “if one adds nanoparticles to fuel cells, the efficiency will increase.” Based on the results found, the hypothesis appeared to be supported; however, the t-test reported that the results were not significant, and more trials may have helped. The independent variable did influence the dependent variable; there was a 17-second increase in the IV group timing.

Further research could explore why nanoparticles increased hydrogen production but not oxygen production. This experiment was done on a small scale; it suggests that nanoparticles can increase the efficiency, and it could be extended to full-size cars. This experiment is a contribution to the future of fuel cells as a source of renewable energy.

Tony Noerpel

Passive Solar Design Overview: Part 2 – Heat transfer and the absorber

(Follow this discussion at the Sustainable Loudoun forum under Passive Solar)

In Part 1 of this series, we looked at the three main architectural styles of passive solar design (Direct Gain, Indirect Gain, and Isolated Gain), as well as the first of the five design aspects, Aperture. This article addresses the next design aspect, Absorber, at an overview level, beginning with a short introduction to heat transfer basics so that the reader understands the fundamentals of building heat gain and loss, all of which are equally important for renovation and for new construction.


Heat Transfer Basics

Heat can be transferred from one mass to another by:

  1. Conduction: Transfer of heat energy resulting from differences in temperature between contacting adjacent bodies or adjacent parts of a body (e.g., put your hand on a warm stove). Heat travels through walls via conduction.
  2. Convection: The natural tendency for a gas or liquid to rise when it comes in contact with a warmer surface (e.g., gliders and soaring birds seek rising thermals over sun-drenched land surfaces). For example, interior air will convect upwards from warm thermal mass areas and downwards alongside cool window or wall surfaces.
  3. Radiation: When one object warms a cooler non-contacting object (e.g., what you feel on your face as you sit in front of a fire or on a sunny beach). Hotter objects will transfer heat to cooler objects within direct line of sight. A person standing near a warm thermal storage mass will feel more comfortable than standing near a poorly insulated wall or window.

Figure 5 shows how these heat transfer types are experienced by windows (and walls, except for transmitted radiation).

Figure 5 – Heat Transfer by Conduction, Convection, Radiation, and Infiltration

Building Heat Losses

We need a short primer in thermodynamics (don’t worry, this will be relatively simple). First, we have to discuss units of heat. In the English system used by the US, a British Thermal Unit (BTU) is the amount of heat energy needed to raise the temperature of one pound of water by one degree Fahrenheit. In the SI system (the rest of the world), joules and kilowatt-hours are the measures of heat energy (1,055 J = 1 BTU and 1 kilowatt-hour = 3,412 BTUs).

Next, we look at heat energy used over time. If a Bunsen burner (assuming no heat loss) raises the temperature of 1 pound of water by 20 degrees F over the course of one hour, then the heat delivery rate is 20 BTU/hour.

In order to determine how much solar heat input and thermal storage we will need, we must understand the heat losses of the building under design:

Qloss = (Σ(UA)n + Cv)(ti – to)

where:
    Qloss = BTU/hr or kW
    U = 1/R-value (conduction, see R-values of common materials)
    A = area (ft2 or m2)

    n = exterior building surfaces (all walls, windows, ceilings, floors)
    Cv = infiltration losses (see Architect’s Handbook) [1]

    ti = desired indoor temperature
    to = outdoor design temperature, normally the 97.5th percentile coldest value (it is colder only 2.5% of the time)
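
As a minimal sketch of this equation (every area, R-value, and temperature below is a made-up example, not design guidance), the loss calculation looks like this:

```python
# Illustrative evaluation of Qloss = (sum(U*A) + Cv) * (ti - to).
# Every number below is a made-up example, not a recommendation.

surfaces = [                    # (name, area in ft^2, R-value)
    ("south windows", 120, 5.0),
    ("other windows",  60, 3.0),
    ("walls",         900, 30.0),
    ("ceiling",       800, 50.0),
    ("floor",         800, 20.0),
]

cv = 90.0         # infiltration losses, BTU/hr per deg F (example value)
t_indoor = 68.0   # desired indoor temperature, deg F
t_outdoor = 10.0  # 97.5th percentile design temperature, deg F (example)

ua_total = sum(area / r_value for _, area, r_value in surfaces)   # U = 1/R
q_loss = (ua_total + cv) * (t_indoor - t_outdoor)                 # BTU/hr
print(f"Design heat loss: {q_loss:,.0f} BTU/hr")                  # ~12,800 BTU/hr here
```
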

Building Heat Gains

Now that we know how much heat is being lost by the building, we can determine how much heat we need to collect. From Part 1, we know how much energy can be received by our aperture. Let’s size our aperture now (with rough calculations) to balance out the losses:

Qgain = Σ((Qinsolation + Qdiffuse + Qreflected) × A × SHGC)n + Qother

where:
    Qgain = BTU/day or kWh/day
    Qinsolation = BTU/ft2/day or kWh/m2/day from table in Part 1

    Qdiffuse = (normally a part of the empirical insolation data, more at NREL)
    Qreflected = insolation energy x surface reflectivity (rough estimate, more at NREL)
    n = each window facing the equator (cooling calculations must account for east and west windows)
    SHGC = Solar Heat Gain Coefficient

    Qother = Heat from people and various powered devices inside the insulated shell [2]

So in order for our building to have sufficient heat input, the daily gains must, on average, equal the losses accumulated over each 24-hour period, centered around the desired temperature. On cloudy days, the deficit is made up by extra thermal mass (see below), backup heating, or extra layers of thermal underwear. Note that backup heating could be an active solar heating system with a small collector array and a large storage tank that collects and stores heat on sunny days for use on cloudy days.
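
Continuing the illustration from the loss sketch above (same made-up house, with an assumed average winter temperature difference and assumed insolation values), a rough aperture-sizing check looks like this:

```python
# Rough aperture sizing: average daily gains should cover 24 hours of losses.
# All inputs are illustrative examples carried over from the loss sketch above.

ua_plus_cv = 220        # BTU/hr per deg F, from the loss sketch
delta_t_avg = 33        # average winter indoor-outdoor difference, deg F (example)
daily_loss = ua_plus_cv * delta_t_avg * 24            # BTU/day

insolation = 1_000      # direct, BTU/ft^2/day on vertical south glass (example)
diffuse, reflected = 150, 100                         # BTU/ft^2/day (examples)
shgc = 0.56             # solar heat gain coefficient of the chosen window
q_other = 60_000        # BTU/day from people, appliances, and lights (example)

gain_per_sqft = (insolation + diffuse + reflected) * shgc
required_area = (daily_loss - q_other) / gain_per_sqft
print(f"Average daily loss: {daily_loss:,.0f} BTU; south glazing needed: ~{required_area:.0f} ft^2")
```

If the required glazing area turns out larger than the design can reasonably accommodate, the usual response is to iterate: add insulation, shrink the envelope, or accept more reliance on thermal mass and backup heat.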

An important point to note: the higher the R-value and lower the area of the walls and windows, the less energy is lost through them, hence less sunlight (windows) and thermal mass are needed to achieve and maintain the desired temperature range. That’s why superinsulation techniques (e.g., R-50 strawbale walls, minimal thermal-bridging wall components) and space efficiency are commonplace in passive solar design (compared to 6″ R-19 walls or 4″ R-13 walls, for example). Strawbale walls have far lower embodied energy than concrete, so are highly attractive from an EROEI standpoint. Due to its significant breadth, the subject of energy efficient building techniques will be the subject of another article.

Absorber

The absorber in a passive solar implementation is the surface that receives the sunlight (direct or reflected), converting the visible light and infrared spectrum energy into heat. Figure 6 shows the light spectrum energy density that penetrates the atmosphere. Note that the most intense radiation comes from the visible light spectrum between 400 and 700 nm, though substantial amounts are also available in the infrared spectrum (if not substantially blocked by low-E glass).

Figure 6 – Solar Intensity at Sea Level by Wavelength

Hence, an appropriate absorber in a passive solar design will convert as much of this impinging spectral energy into heat as possible. The measure of how well the absorber captures the radiant energy is referred to as the absorptivity. The higher the absorptivity, the less energy is reflected away (see Table 1 for properties of common materials).

Once the sun’s energy is captured by the absorber, it can also be re-radiated in the infrared spectrum to cooler objects; the measure of this re-radiation is called emissivity. For direct gain homes where a thermal storage floor is heated, emissivity is not much of a concern, as the heat is radiated into the room (if the people or objects in the room are cooler). In situations where the absorbing surface faces external surfaces with little insulating value (i.e., windows), the re-radiation loss is a reduction in energy efficiency and should be minimized as much as possible. Some materials or treatments have much higher absorptivity values than emissivity values; these are called selective surfaces, and are used quite frequently in solar thermal collectors for hot water and active solar heating. Many materials have varying values of absorptivity and emissivity depending on the temperature and spectral wavelength, so the values listed are averaged over the solar intensity spectrum shown in Figure 6. See this list for more materials.

Table 1 – Absorptivity and Emissivity of Common Materials[3][4]

Material                          Absorptivity    Emissivity
White tile/stone/paint            0.30 – 0.50     0.85 – 0.95
Unfinished concrete               0.65            0.87
Red brick/stone/paint             0.65 – 0.80     0.85 – 0.95
Flat black paint                  0.96            0.87
Copper oxide                      0.90            0.17
Black nickel                      0.90            0.08
Black chrome-coated copper foil   0.95            0.11
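
To see why a selective surface matters when an absorber faces a cold surface such as a window, here is a small sketch of the simplified radiation balance q = aG - e·sigma·(Ts^4 - Tsurr^4), using two entries from Table 1. The irradiance and temperatures are assumed example values, and convection is ignored.

```python
# Net radiant gain per unit area of an absorber: q = a*G - e*sigma*(Ts^4 - Tsurr^4).
# Absorptivity/emissivity come from Table 1; irradiance and temperatures are examples.

SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4
irradiance = 600.0        # incident solar irradiance on the absorber, W/m^2 (example)
t_surface = 305.0         # absorber surface temperature, K (example)
t_surroundings = 285.0    # effective temperature of what it radiates to, K (example)

materials = {             # (absorptivity, emissivity)
    "flat black paint": (0.96, 0.87),
    "black chrome-coated copper foil": (0.95, 0.11),
}

for name, (absorptivity, emissivity) in materials.items():
    radiated = emissivity * SIGMA * (t_surface**4 - t_surroundings**4)
    net_gain = absorptivity * irradiance - radiated
    print(f"{name:33s} net gain ~{net_gain:4.0f} W/m^2")
```

Under these assumptions the selective surface keeps roughly 80 W/m^2 more than the flat black paint, which re-radiates that energy away.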

In Part 3, we’ll cover how to select and size thermal mass in order to even out the swings in outside temperature and internal solar gain. Future articles in the series will be devoted to distribution, controls, renovation, design tools, green building standards, case studies, and more.

References:
1. David Kent Ballast, Architect’s Handbook of Formulas, Tables, and Mathematical Calculations, Prentice Hall, 1988
2. Kissock, J, Internal Heat Gains and Design Heating & Cooling Loads, University of Dayton Lecture
3. Michael J. Crosbie, The Passive Solar Design and Construction Handbook, John Wiley and Sons, 1998

4. John Little, Randall Thomas, Design with Energy: The Conservation and Use of Energy in Buildings, Cambridge University Press, 1984

Will Stewart

Transitioning away from declining petroleum production

While we are not in the habit of frequently firing out news articles, there are three relatively recent articles that drive home the fact that conventional oil production is inescapably on the verge of declining (if it hasn’t already). The first is an admission by none other than the Wall Street Journal:

… listen to warnings about a different crisis that is looming and that could cause massive disruption. A shortage of oil could be a real problem for the world within a fairly short period of time. It was unfortunate for the [Industry Taskforce on Peak Oil and Energy Security] which chose to point this out yesterday that they should have chosen to do so on the day the Organization of Petroleum Exporting Countries, or OPEC, reported that the effects of the financial downturn had led to a slight downgrade in its forecast for oil consumption this year.

According to Philip Dilley, the chairman of Arup, the consulting engineers: “We must plan for a world in which oil prices are likely to be both higher and more volatile and where oil prices have the potential to destabilize economic, political and social activity.”

The next surprisingly candid article (given the source) is about scientists at Kuwait University and the Kuwait Oil Company:

Predicting the end of oil has proven tricky and often controversial, but Kuwaiti scientists now say that global oil production will peak in 2014.

Take Mexico as just one example. The nation that has long represented a top oil exporter has experienced plummeting oil production, and might even begin importing oil within the decade.

Already we are seeing signs that $80/bbl oil prices are changing how far goods are moved. From SupplyChainManagement.com:

The move toward “near-sourcing” is underway. Also called “reverse globalization” or “shortening the supply chain,” near-sourcing describes the return of American manufacturing in order to decrease shipping expenses. As freight costs remain high, globalization has become less competitive and is expected to remain so for the foreseeable future.

Historically, cheap gas fueled globalization. It enabled companies from all over the world to shop globally for cost-saving business solutions…fuel has been 15 percent of carrier operating costs. Today, it is estimated to be substantially over 40 percent…the cost of transporting imported goods into the United States is now equivalent to a 9 percent tariff on imports.

So the WSJ has caught on to the timeframe and risks associated with peak oil, and even a major OPEC exporter is telling us it is breathing down our necks. With shortening supply chains, transitioning goods and services back to local sourcing will provide a lot of opportunities for employment and new businesses.

In future articles, we will discuss in greater detail what we can do to plan and implement such a transition.

Will Stewart

Passive Solar Design Overview: Part 1 – The Basics

Passive solar refers to the design and placement of a building to enable solar heating without the need for sensors, actuators, and pumps, in contrast to active solar, which utilizes pumps/blowers, sensors, and logic control units to manage collection, storage, and distribution of heat. The two techniques are not exclusive, however, and can work together effectively.

As solar radiation (insolation) is a diffuse energy source, and not at the beck and call of a thermostat, passive solar design techniques are at their best when combined with other related methods, such as energy efficiency (insulation, weatherization, building envelope minimization), daylighting, passive cooling, microclimate landscaping, and a conservation lifestyle (e.g., temperature settings, raising and lowering of insulated shades, etc). Most of these topics will be covered in other articles, though passive cooling will be addressed in this series, which is intended as an overview; a complete engineering treatment of passive solar design would require several dozen articles.

Even though solar insolation is diffuse, and generally weaker the further one gets from the equator, it can supply the majority of a building’s heat energy input even in high-latitude places such as Canada, Norway, Germany, the northern US (Maine, New Hampshire, Michigan, Wisconsin, Minnesota, North Dakota, Montana, Idaho, Washington state, etc.), Scotland, and the Netherlands. Even the US Department of Defense has a passive solar design guide. Design approaches such as Passivhaus have achieved up to a 90% reduction in energy use over traditional building methods. In areas with reasonably consistent winter insolation, well-insulated passive solar buildings with sufficient thermal mass storage can approach 100% of their space heating needs with passive solar. Enhancements can also be added to existing buildings through major or minor renovations, or through simple additions (Part 4 of the series).

History

The Greeks faced severe fuel shortages in the fifth century BC and responded by arranging their houses so that each could make maximum use of the sun’s warming rays. A standard house plan emerged, with Socrates noting, “In houses that look toward the south, the sun penetrates the portico in winter.” The great Greek playwright Aeschylus even proclaimed that only primitives and barbarians “lacked knowledge of houses turned to face the winter sun.” The Romans picked up on this technique and improved it by adding windows of mica or glass to better hold in the heat. They passed laws to protect the solar access rights of owners of solar homes from shading by new buildings. In the Americas, the Pueblo and Anasazi took advantage of solar insolation in their adobe and cave dwellings, respectively.

In the 18th and early 19th centuries, solar greenhouses became popular among those of means for growing exotic tropical plant life in temperate climes. In the 20th century, German architects such as Hannes Meyer, director of the influential Bauhaus architectural school, urged the use of passive solar design techniques, which began to flourish in the 1930s only to be pushed aside by the Nazis and WWII. Many German architects made their way to the US, and a small solar market developed. Built in 1948, Rosemont elementary school in Tucson obtained over 80% of its heat via solar means, but in 1958, with cheap energy available and an extensive addition planned, the school district chose to go with a gas-fired furnace. The 1970s saw more emphasis on renewable energy, and passive solar became a household word, though it still penetrated only a tiny percentage of builders’ visions for the new homes market. More in-depth passive solar history can be found at the California Solar Center.

The Basics

Location and Orientation

To assess whether passive solar is advantageous to a location, one must first find out the amount of winter sunlight that is available. The simplest way is to find solar insolation data for the site under consideration, ideally collected over a series of decades (noting that a changing climate can mean the data may need to be extrapolated). The data can come in tabular or map form, with the latter providing a quick indicator of the amount of winter insolation in one’s area. Tabular data, however, is more precise, giving one the best information available about trends in their area. A note of caution: the data is usually an average of conditions, and does not necessarily take into consideration unusual weather years or how the climate may change in one’s area of consideration.

Interpretation of Data:

Most of the maps and tabular data measure solar insolation in kWh/m2/day, which is roughly the number of kilowatt-hours of energy striking a square meter of surface in a day. This is also referred to as Sun Hours on some maps, and we will refer to it as such throughout this series. Important note: since virtually all modern passive solar design focuses on vertical windows, data must be specified for or converted to a vertical orientation. Some of the data currently available is for collectors tilted at an angle equal to the site’s latitude (L) or for a horizontal surface (H), and would need to be converted to a vertical surface (V). The list below contains a partial selection of solar maps and data; make sure any source you use focuses on winter data, as other maps/data are intended for year-round solar photovoltaic projections.

  • World: Maps – Solarex (L), FirstLook (Americas only currently) (H); Data – WRDC (select Global)
  • Canada: Maps – Solarex (L); Data – WRDC (select Global)
  • Europe: Maps – Satellite data map (H); Data – WRDC (select Global)
  • US: Maps – National Renewable Energy Labs (select Vertical surface) (V); Data – Many US cities (H), Detailed data (H) (manual), Other sources
  • Australia: Maps – Aus BOM June Map (L); Data – Aus BOM site data (L)

The orientation of the building will determine how much solar insolation is captured during the desired period of the day. For example, a passive solar house facing the equator will receive an equal amount of solar heat before and after noon. The more a building is oriented away from true south (or north in the southern hemisphere) the less winter solar insolation it will be able to capture, and it becomes more susceptible to undesirable summer solar energy that is harder to shade with a properly sized overhang.

In addition to direct solar insolation beaming from the sun, there is also diffuse radiation from the sky, and reflected radiation from the ground.

Figure 1 – Types of solar input

Design Aspects:

Passive solar building design revolves around five main aspects:

Aperture: The set of windows and overhangs that determine how much sun enters the building.
Absorber: The material that the sun’s rays come into contact with.

Thermal Mass: The material that stores the sun’s thermal energy for re-release after sundown.

Distribution: The means by which the thermal energy is released to the living/working spaces.
Control: The techniques used to control the collection and distribution of the sun’s thermal energy.

These aspects can be configured by the designer/architect into roughly three main design themes (with endless variations):

  1. Direct Gain: Sunlight shines into and warms the living space.
  2. Indirect Gain: Sunlight warms thermal storage, which then warms the living space.
  3. Isolated Gain: Sunlight warms another room (sunroom) and convection brings the warmed air into the living space.



Figure 2 – Direct solar gain


Figure 3 – Indirect solar gain


Figure 4 – Isolated Gain

In Part 1 of this series, we will cover the Aperture:

Aperture:

The first step in passive solar design is determining how to collect the sun’s energy. In most climates where passive solar is employed, this means windows of one form or another. An important metric of a window is its Solar Heat Gain Coefficient (SHGC), which measures how much of the sun’s energy passes through the window without reflection, or absorption and re-radiation. The higher the SHGC, the more solar energy a window will let through. A plain single-pane window normally has an SHGC of 0.86, while a plain dual-pane window is about 0.72.

In order to reduce heat loss in cool and cold climates, windows are normally at least dual pane, if not triple pane. The dead air space between window panes helps to increase the insulation factor, called the R-value (or its inverse, the U-value). One single pane of ordinary glass has an approximate R-value of 0.85; a dual-pane window with 3/8 inch of air space typically has an R-value of 2.1. Substituting denser, less conductive gases such as argon and krypton allows greater distances between panes before the gas begins to convect (transferring heat at a much higher rate), increasing their insulating effect. Each pane added, however, blocks/absorbs/reflects more solar energy, which effectively reduces the window unit’s SHGC. Additionally, low-E coatings that help reduce the amount of infrared heat radiated out of a room through a window also reduce the SHGC (by an amount dependent on the type of low-E coating). So a balance must be struck by the designer/architect between the amount of energy received during winter sunlight hours and the amount of energy lost 24 hours a day. There are windows designed for passive solar applications that provide sufficient SHGC while still providing adequate insulation (e.g., one such window has an SHGC of 0.56 and an R-value of 5).
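
As a quick sketch of that trade-off (with assumed winter insolation and temperatures, purely for illustration), the daily energy balance per square foot of south-facing glass can be compared for an ordinary dual-pane unit and a passive solar unit like the one mentioned above:

```python
# Net daily balance per ft^2 of south-facing glass:
#   gain = winter insolation * SHGC;   loss = (1/R) * delta_T * 24 hours.
# Insolation and temperatures are assumed example values.

windows = {                  # (SHGC, R-value)
    "plain dual pane":    (0.72, 2.1),
    "passive solar unit": (0.56, 5.0),
}

insolation = 1_000               # BTU/ft^2/day striking the vertical glass (example)
t_in, t_out_avg = 68.0, 35.0     # average indoor / outdoor temperatures, deg F (example)

for name, (shgc, r_value) in windows.items():
    gain = insolation * shgc
    loss = (1.0 / r_value) * (t_in - t_out_avg) * 24
    print(f"{name:18s}  gain {gain:4.0f}  loss {loss:4.0f}  net {gain - loss:+5.0f} BTU/ft^2/day")
```

Under these assumptions the better-insulated window comes out ahead despite its lower SHGC; with sunnier or milder assumptions the ranking can flip, which is exactly the balance the designer must strike.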

There is ongoing research to bring aerogel windows to commercial production, as these windows provide extremely high R-values (approximately R-10 per inch) while having SHGCs of 0.52 or greater.

The orientation of the window is just as important; windows facing the equator receive the greatest amount of winter sunlight. This orientation also greatly reduces unwanted solar collection during the warmer days of the year, as windows facing east and west are difficult to shade effectively with simple overhangs, requiring larger and/or view-blocking awnings.

Discuss this topic at the Sustainable Loudoun Forum under Energy: Passive Solar

Will Stewart

References:

California Solar Center: Passive Solar History
US DoE Passive Solar Home Design

So you’ve been considering vegetable gardening…where to start?

Spring is around the corner, with its burst of warmth, renewal, and an irresistible call to come outdoors. Why should we bother heading out to our yards with shovels and seeds?

There are many good reasons:

  1. Fresh air and sunshine: (there shouldn’t be any need to explain this one!)
  2. Cost savings: While home values, goods, and services are deflating, food and energy costs are rising. Even “eating at home” costs were up 0.4% this January alone. Economic conditions may take a long time to change, and it is very possible that we will not return to continuous economic growth.
  3. Supply chains are shortening: Also called “relocalization” or “reverse globalization”, ‘near-sourcing’ describes the need to return to American-made goods and services in order to decrease shipping expenses. As freight costs remain high (and oil prices are only going to trend upward), globalization has become less competitive and is expected to remain so for the foreseeable future.
  4. Pesticides: You decide which (if any) pesticides are used on your food, not some distant industrial agri-corp. There are natural pesticides and/or methods that rely on encouraging natural pest predators (e.g., ladybugs eat aphids, tiny braconid wasps lay eggs in hornworms), and companion plantings that discourage pests (e.g., basil next to tomatoes, marigolds near a variety of vegetables, etc).
  5. Genetically modified: You decide which plants to grow, not Monsanto or ADM.
  6. Self-determination: You are less reliant on big chain supermarkets, and can choose to grow the items you want that are not readily available at Loudoun’s farmers markets and CSAs.

Where to Start?

Planning a garden is an important first step that should be given high priority. It is far easier to learn from the many mistakes of others than it is to reinvent the wheel, so in this article, we will focus on planning.

Think big, start small – think about where you want to be in 3 years, 5 years, etc., but start with a small plot the first year, to get your feet wet and learn what works best for you. Lay out a design that you can grow into, so that you can expand your garden according to a vision. For those who really want to consider what their whole yard might look like when also considering herbs, fruit vines/shrubs/trees, and nut trees, see this informative introduction to permaculture by the National Sustainable Agriculture Information Service. Permaculture is a broad topic, and will be discussed in more detail in future articles.

Some resources that I value highly include:

  • Loudoun County Master Gardeners: There is a wealth of information on their website, at clinics they hold, and terrific examples at their demo garden. One clinic is coming up on March 20-21.
  • Friends/Neighbors who garden: Always a great source of information about what works well in the area, especially in regard to pests and soils. It can be helpful to coordinate some pest control measures, such as Japanese Beetle traps.
  • Books: There are many excellent books, more than we can reliably list here (especially those from the Rodale Institute), but let me share a few that have proved valuable to me and others:

    • The Garden Primer – Barbara Damrosch: Covers garden basics and the main garden crops with superb details on each. I consider this a must-have for every beginner.
    • Great Garden Companions – Sally Jean Cunningham: Covers planning, garden design, raised beds, companion plantings, plant family rotation, natural pest control. Highly valuable for beginners and intermediates alike, another must-have.
    • The New Organic Grower – Eliot Coleman: Covers planning, garden design, raised beds, pest control
    • Four Season Harvest – Eliot Coleman: Shows how to plan a garden to take advantage of lightweight, movable greenhouses to enable productive gardens Spring, Summer, Winter, and Fall.
    • Square Foot Gardening: I’ve not tried this but some people report positive results.

  • Weblinks:

Major Planning Steps

  • Where to plant: Select a garden site that has plenty of sunlight, is a place you will want to spend a little time, and has reasonable access to water (think about where you will run your hose or bury a warm season water line to take water directly to your garden).

  • Preparation: For those who do not already have a garden, a new garden site is often part of the yard, currently covered with grass. There are a number of ways of preparing your soil, and most of them start with how to kill the grass. Simply turning it over doesn’t always kill the grass down to the roots. Some approaches:

    • Lay down a layer of newspapers covered with mulch/compost: This is a simple way to start. The grass roots decompose after dying, providing in-ground compost biomass that acts as a fertilizer.
    • Lay down a layer of black plastic: Also reasonably simple, and kills the grass more quickly due to the heat trapped at ground level by the black plastic. Some approaches call for punching small holes through which you would plant.
    • Tilling: Tillers range from large gas models to small electric ones (I use the latter from time to time), and most rental companies carry them. Some approaches don’t use tillers at all.
    • Shoveling: This approach provides the greatest opportunity for exercise. The double-digging approach is the most shovel-reliant.
  • Soils: What is the fertility and alkalinity of your soil? How deep is it? Don’t worry, soils in this county can be easily amended to produce bountiful yields. Know a neighbor who is looking to get rid of horse or livestock manure? Properly composted, it can yield tremendous benefits to your garden’s productivity.

  • Seed sources: It’s simpler to buy some of your plants already started, though some people like to start from the beginning with seeds. One of the obvious names is Park Seeds, though they rely fairly heavily on hybrid varieties. There are now many seed suppliers that distribute open-pollinated seeds, which allow gardeners to save their seeds and have a reliable crop the following year. Attempting to save and replant hybrid seeds is almost always disappointing, as the genetics of the progeny are unpredictable and many hybrids are sterile or bear no fruit. Seed sources include (and more are here):

    • Southern Exposure Seed Exchange P.O. Box 460, Mineral, VA 23117 gardens@southernexposure.com
      Phone 540-894-9480 They carry a wide variety of open-pollinated and heirloom vegetables, and my favorite (the most local, too)
    • Bountiful Gardens 18001 Shafer Ranch Road, Willits, CA 95490-9626 707-459-6410
      All of their seeds are open-pollinated and untreated.
    • Johnny’s Selected Seeds Foss Hill Rd. Albion, ME 04910 (207) 437-9294
      Wide variety of hybrid, heirloom and open-pollinated seeds.
    • Seed Savers Exchange 3076 N. Winn Rd. Decorah, IA 52101 (319) 382-5990 Non-profit dedicated to preserving seed diversity. Heirloom and open-pollinated seeds.
    • Seeds of Change 621 Old Santa Fe Trail #10 Santa Fe, NM 87501 (505) 438 8080
      Source of heirloom and open-pollinated seeds.
    • Start your seeds with an indoor micro greenhouse.

  • Tools: Ever buy a tool that was cheaply made and didn’t last anywhere near as long as you expected? I can’t count the number of times this has happened to me with Chinese-made tools purchased at the big hardware stores. So I started buying mine from Lehman’s in Pennsylvania, and have been very pleased with the workmanship and sturdiness of these tools. I’d rather pay twice as much any day for a tool that lasts 5 times as long. Some of the tools you will need (others will be particular to the gardening style you select):

    • Shovel: no surprise here. A trenching shovel may also come in handy.
    • Stirrup hoe: A fascinating improvement to the old-fashioned hoe. Now there is no more lifting and hacking, just moving the hoe back and forth over weeds to shear them away.
    • Hand cultivator: This one is multi-purpose.

All this may seem like a lot, but when you start small with a vision of what it will become in the future, it becomes quite manageable and enjoyable. Make the leap! We look forward to hearing your questions and plans at the Sustainable Loudoun Forum under the Garden Planning section.

Will Stewart

Loudoun gas power plant application up for review Monday

There is a gas power plant application coming up for Board of Supervisors review this Monday, with public input starting at 5 PM. So that you have an understanding of this application, here are some key points:

1. (From the PEC) The location of this industrial use conflicts with the County’s Comprehensive Plan. The Comp Plan currently calls for low density residential development (1 unit per 10 acres) at the location being considered, not an industrial power plant. This was a deliberate decision to protect cultural and natural resources in the immediate area, including the Goose Creek Reservoir which provides the surrounding region with drinking water. The public emphatically reaffirmed the vision for the entire Transition Area and this area in particular in a series of public hearings between 2005 and 2008.

2. The plant is primarily full-time (baseload) electricity generation (approx. 600 MW), with some peak-load capacity (approx. 300 MW). The solar component is only 1 MW of nameplate capacity (its output only when the sun is directly overhead), which works out to roughly 0.25 MW of average generation. Hence, the energy generated will be well over 99.9% from fossil fuels (see the rough calculation below), so the “hybrid” moniker is misleading at best.
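For a sense of just how small the solar contribution is, here is a back-of-envelope sketch in Python. The duty cycles assumed for the baseload and peaker blocks and the ~25% solar capacity factor are illustrative guesses, not figures from the application.

```python
# Back-of-envelope share of the plant's energy that would come from the sun,
# assuming the ~600 MW baseload block runs continuously, the ~300 MW peaker
# runs only occasionally, and the 1 MW solar array averages ~25% of its
# nameplate rating. All duty-cycle figures are assumptions for illustration.

baseload_avg_mw = 600.0    # assumed to run continuously
peaker_avg_mw = 30.0       # assumed ~10% duty cycle on the 300 MW peaker
solar_avg_mw = 1.0 * 0.25  # 1 MW nameplate at ~25% capacity factor

total_avg_mw = baseload_avg_mw + peaker_avg_mw + solar_avg_mw
solar_share = solar_avg_mw / total_avg_mw
print(f"solar share ~ {solar_share:.4%}, fossil share ~ {1 - solar_share:.2%}")
# -> solar share ~ 0.0397%, fossil share ~ 99.96%
```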

3. Virginia imports nearly 60% of its natural gas, producing 128,454 million cubic feet while consuming roughly 300,000 million cubic feet; the shortfall of about 171,500 million cubic feet (some 57% of consumption) must be imported. So any claim of energy independence is unfounded.
http://www.eia.doe.gov/pub/oil_gas/natural_gas/data_publications/natural_gas_annual/current/pdf/table_073.pdf

4. The US peaked in natural gas production in 1973. While various current estimates purport large potential resources of shale-bound natural gas in the US, like other energy investment schemes, there has been significant overestimation intended to bolster the stock value of the main holding companies.

From The Oil Drum article “ExxonMobil’s Acquisition of XTO Energy: The Fallacy of the Manufacturing Model in Shale Plays”:

The manufacturing model developed in the Barnett Shale play (Fort Worth basin, Texas), where almost 14,000 wells have been drilled. The greatest number of commercially successful wells are located in two core areas or “sweet spots,” and results are not uniform or repeatable even within these core areas.

The overriding problem with most U.S. shale plays is the lack of any elements of natural reservoir rock. Shale typically has no effective (connected) porosity, and has permeabilities that are hundreds to thousands of times less than the lowest permeability tight sandstone reservoirs. Unless siltstone or sandstone interbeds are present within the shale that have better matrix porosity and permeability, all reservoir is artificial: it must be created by engineering brute force.

Much progress has been made with completion methods, but unless stimulation produces an extensive, micro-fractured rock face, long-term production at commercial volumes is unlikely.

The mainstream belief that shale plays have ensured North America an abundant supply of inexpensive natural gas is not supported by facts or results to date. The supply is real but it will come at higher cost and greater risk than is commonly assumed.

5. Any power generated by this plant will not reduce coal-fired power generation, as that production is cheaper and will continue to be utilized until caps on carbon emissions are instituted. It would, however, make wind or solar generation from potential local or nearby sources less likely to be implemented.

6. Natural gas combustion in a 60% efficient combined cycle generator (like one of the ones proposed) produces about 40% of the carbon emissions that a typical coal plant produces.
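As a rough sanity check on that figure, the sketch below compares carbon emissions per kilowatt-hour for a 60% efficient gas combined-cycle plant and a typical existing coal plant. The fuel carbon intensities and the assumed 33% coal plant efficiency are approximate, illustrative values; under these assumptions the gas plant comes out at roughly one-third of coal’s per-kWh emissions, in the same general range as the claim above.

```python
# Rough per-kWh CO2 comparison: gas combined cycle vs. a typical coal plant.
# Carbon intensities are approximate combustion values; efficiencies are the
# 60% figure cited for the proposed plant and an assumed 33% for coal.

GAS_LB_CO2_PER_MMBTU = 117.0   # natural gas combustion, approx.
COAL_LB_CO2_PER_MMBTU = 210.0  # bituminous coal combustion, approx.

gas_efficiency = 0.60          # combined cycle, per the application
coal_efficiency = 0.33         # typical existing coal steam plant (assumed)

BTU_PER_KWH = 3412.0
# lb CO2 per kWh of electricity = (lb CO2 per MMBtu of fuel) * (MMBtu of fuel per kWh)
gas_lb_per_kwh = GAS_LB_CO2_PER_MMBTU * (BTU_PER_KWH / 1e6) / gas_efficiency
coal_lb_per_kwh = COAL_LB_CO2_PER_MMBTU * (BTU_PER_KWH / 1e6) / coal_efficiency

print(f"gas CCGT: {gas_lb_per_kwh:.2f} lb CO2/kWh")
print(f"coal:     {coal_lb_per_kwh:.2f} lb CO2/kWh")
print(f"gas as a share of coal: {gas_lb_per_kwh / coal_lb_per_kwh:.0%}")
```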

7. Green Energy Partners has claimed 90% efficiency with their main combined cycle, though this presumes they would export the excess heat to nearby buildings. There are no buildings nearby that can readily take advantage of this potentially wasted heat (the Wegman’s complex over 2 miles away already has new, efficient HVAC systems that would hardly be ripped out and replaced).

8. One way “peaker” gas power plants can complement wind and solar power generation is through the ability to ramp up and down quickly as conditions vary. The current power plant configuration is predominantly baseload generation, however, with only 1/3 of its capacity as ‘peaker’.

The public hearing starts at 5:00 PM in the first floor Board Room of the County Government Center. If you can’t attend, please send an email to the Board today.

Add to the discussion at the Sustainable Loudoun forum under Natural Gas Power Generation.
