Hledik, R. 2009. How Green Is the Smart Grid? The Electricity Journal 22, no. 3: 29-41.
"A simulation of the US power system suggests that both conservative and more technologically aggressive implementations of a smart grid would produce a significant reduction in power sector carbon emissions at the national level. A conservative approach could reduce annual CO2 emissions by 5 percent by 2030, while the more aggressive approach could lead to a reduction of nearly 16 percent by 2030" (29).
Because the concept of a "smart grid" has not been fully pinned down, two scenarios are examined: the first uses only technologies that are currently available, while the second adds an expanded set of possible future technologies.
"At a basic level, the smart grid will serve as the information technology backbone that enables widespread penetration of new technologies that today's electrical grid cannot support. These new technologies include cutting-edge advancements in metering, transmission, distribution, and electricity storage technology, as well as providing new information and flexibility to both consumers and providers of electricity. Ultimately, access to this information will improve the products and services that are offered to consumers, leading to more efficient consumption and provision of electricity" (30).
A key component of smart grid technology is advanced metering infrastructure (AMI), which enables dynamic pricing of electricity.
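To make that link concrete, here is a minimal sketch, with entirely hypothetical rates and a made-up usage profile, of the time-of-use billing that hourly AMI data makes possible:

# A minimal sketch of dynamic (time-of-use) pricing enabled by AMI's
# hourly metering. The rates and usage profile are invented for
# illustration; actual tariffs vary by utility.
FLAT_RATE = 0.12                                 # $/kWh, hypothetical
TOU_RATES = {"peak": 0.30, "off_peak": 0.06}     # $/kWh, hypothetical

def tou_rate(hour):
    """Peak pricing in the late afternoon, off-peak otherwise."""
    return TOU_RATES["peak"] if 14 <= hour < 20 else TOU_RATES["off_peak"]

# Hourly usage in kWh for one day; AMI records this profile, which a
# conventional monthly meter cannot.
usage = [0.5] * 14 + [2.0] * 6 + [0.8] * 4

flat_bill = sum(usage) * FLAT_RATE
tou_bill = sum(kwh * tou_rate(hour) for hour, kwh in enumerate(usage))
print(f"flat: ${flat_bill:.2f}, time-of-use: ${tou_bill:.2f}")
# Shifting the heavy 2.0 kWh loads off peak would cut the TOU bill,
# which is the behavioral response dynamic pricing is meant to elicit.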
There are a variety of proposals to produce more energy from renewable sources, each with a different target. Doubling the Renewable Portfolio Standards (RPS) would lead to 19% of energy coming from renewables in 2030; T. Boone Pickens and the EU both call for 20% from renewables by 2020; the New Apollo Program wants 25% by 2025; Google's energy plan calls for 60% by 2030; and Repower America suggests that 75% come from renewables within the next ten years.
The Brattle Group's RECAP model is used to explore these scenarios using EIA data and assumptions.
"Among the key outputs of RECAP is a forecast of the CO2 emissions from both existing and new power plants. This forecast depends on both the projected mix of new plants that will be added to the system, and the operation of all power plants that are connected to the grid. Implementation of a smart grid will influence both of these. In this study, there are four specific smart grid impacts that have been modeled" (37). These are: peak demand reduction, conservation, increased penetration of renewables and reduced line loss.
Three scenarios are explored out to 2030: a business-as-usual forecast, in which CO2 emissions grow by an average of 0.7% per year; the Conservative Scenario, which models the effects of existing information and communication technology (ICT) and shows average annual growth of 0.5%; and the Expanded Scenario, which shows an average annual growth rate of -0.1%.
Overall, dynamic pricing leads to a nominal improvement in overall CO2 emissions. The combination of dynamic pricing and information displays leads to a 5% reduction in annual CO2 emissions. "By far, the single largest reduction comes from the cleaner mix of generating capacity that is enabled by distributed resources and an expanded transmission system, amounting to a 9.9% reduction in CO2 emissions" (39).
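As a rough consistency check on these figures, the short script below compounds the three growth rates to 2030 (assuming a 2009 base year, which the article does not state) and compares each smart grid scenario with business as usual:

# Rough consistency check on the scenario growth rates reported above.
# Assumes a 2009 base year and constant annual growth to 2030; the
# article's exact base year and baseline emissions are not given here.
years = 2030 - 2009

def cumulative_growth(annual_rate, years):
    """Total multiplier after compounding an annual growth rate."""
    return (1 + annual_rate) ** years

bau = cumulative_growth(0.007, years)            # business as usual: +0.7%/yr
conservative = cumulative_growth(0.005, years)   # Conservative: +0.5%/yr
expanded = cumulative_growth(-0.001, years)      # Expanded: -0.1%/yr

for name, level in [("Conservative", conservative), ("Expanded", expanded)]:
    reduction = 1 - level / bau
    print(f"{name}: {reduction:.1%} below business as usual in 2030")
# Prints roughly 4% and 15%, close to the article's ~5% and nearly-16%
# figures; the gap is attributable to rounding of the growth rates and
# the unknown base year.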
Wednesday, July 15, 2009
Gerner and Schrodt: The Effects of Media Coverage on Crisis Assessment
Gerner, DJ, and PA Schrodt. 1998. The effects of media coverage on crisis assessment and early warning in the Middle East. Early Warning and Early Response.
Media coverage is not uniform, and many point to phenomena such as "media fatigue" to highlight the unevenness of reporting. This piece explores that unevenness by looking specifically at the Arab-Israeli conflict, and finds that media fatigue is in fact measurable.
Media fatigue occurs when conflict events receive less coverage in regions where such events are routine: as a conflict drags on, the media reports on it less frequently than on conflicts that erupt more sporadically. The piece argues that large-scale news sources are useful but more prone to media fatigue; rather than relying on the NYT alone for event coding, researchers must also draw on more specialized sources that cover the conflict at hand in greater depth.
Additionally, media fatigue is not entirely a product of boredom; it may also reflect competition between different events. For example, the article argues that coverage of the Arab-Israeli conflict was crowded out by the collapse of the Soviet bloc beginning in 1989.
In the end, however, studying media fatigue is quite difficult, as it requires counterfactuals: one must estimate how much coverage an event would have received absent fatigue or competing stories.
Labels: Event Data, Media Fatigue
Schrodt: Event Data in Foreign Policy Analysis
Schrodt, PA. 1994. Event data in foreign policy analysis. Foreign Policy Analysis: Continuity and Change. Prentice-Hall: 145-166.
"Event data are a formal method of measuring the phenomena that contribute to foreign policy perceptions. Event data are generated by examining thousands of newspaper reports on the day to day interactions of nation-states and assigning each reported interaction a numerical score or categorical code. For example, if two countries sign a trade agreement, that interaction might be assigned a numerical score of +5, whereas if the two countries broke off diplomatic relations, that would be assigned a numerical score of -8. When these reports are averaged over time, they provide a rough indication of the level of cooperation and conflict between the two states" (2).
Creating event data involves three distinct steps: 1. identify sources; 2. develop a coding system; 3. train human coders.
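As a toy illustration of this pipeline, the sketch below codes a handful of dyadic events and averages them by month. The events and scores are invented for illustration; real projects use established scales (such as Goldstein's) applied to thousands of news reports.

from collections import defaultdict
from statistics import mean

# Invented events in (month, source, target, score) form, following the
# +5 trade agreement / -8 broken relations example quoted above.
events = [
    ("1990-01", "USA", "USSR", +5),   # trade agreement signed
    ("1990-01", "USA", "USSR", -2),   # verbal protest
    ("1990-02", "USA", "USSR", -8),   # diplomatic relations broken
]

# Average the coded scores by dyad and month to get a rough
# cooperation-conflict time series.
by_dyad_month = defaultdict(list)
for month, source, target, score in events:
    by_dyad_month[(source, target, month)].append(score)

for (source, target, month), scores in sorted(by_dyad_month.items()):
    print(f"{source}->{target} {month}: mean score {mean(scores):+.1f}")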
"Event data was originally developed by Charles McClelland in the early 1960s as a bridge between the traditional approach of diplomatic history and the new quantitative analysis of international politics advocated in the behavioral approach. McClelland reasoned that history could be decomposed into a sequence of discrete events such as consultations, threats, promises, acts of violence and so forth. Event data formed the link between the then-prevalent general systems theories of international behavior and the textual histories which provided an empirical basis for understanding that behavior" (7).
"Event data are a formal method of measuring the phenomena that contribute to foreign policy perceptions. Event data are generated by examining thousands of newspaper reports on the day to day interactions of nation-states and assigning each reported interaction a numerical score or categorical code. For example, if two countries sign a trade agreement, that interaction might be assigned a numerical score of +5, whereas if the two countries broke off diplomatic relations, that would be assigned a numerical score of -8. When these reports are averaged over time, they provide a rough indication of the level of cooperation and conflict between the two states" (2).
Creating event data involves three distinct steps: 1. identify sources; 2. develop a coding system; 3. train human coders.
"Event data was originally developed by Charles McClelland in the early 1960s as a bridge between the traditional approach of diplomatic history and the new quantitative analysis of international politics advocated in the behavioral approach. McClelland reasoned that history could be decomposed into a sequence of discrete events such as consultations, threats, promises, acts of violence and so forth. Event data formed the link between the then-prevalent general systems theories of international behavior and the textual histories which provided an empirical basis for understanding that behavior" (7).
Labels: Event Data
Monday, July 13, 2009
O'Brien: Anticipating the Good, the Bad, and the Ugly
Sean O'Brien, “Anticipating the Good, the Bad, and the Ugly: An Early Warning Approach to Conflict and Instability Analysis,” Journal of Conflict Resolution 46, no. 6 (2002): 791.
This article explores structural variables as causes of internal conflict and instability. It uses a statistical model based on fuzzy logic to examine historic cases of instability, and the model is claimed to classify roughly 80% of cases correctly up to five years out.
There is a general review of the history of early-warning systems for instability, and a nice listing of citations. State failure is operationalized in the same way as in the State Failure Task Force (SFTF): through genocide/politicide, ethnic wars, revolutionary wars, and disruptive regime changes. There is also mention of the most basic SFTF model's three independent variables: infant mortality, trade openness, and democracy. King and Zeng's (2001) model is highlighted as improving on the SFTF by adding legislative effectiveness and the fraction of the population in the military, as well as correcting other general methodological problems.
“This study seeks to extend this line of work in several ways. First, we are interested in forecasting the likelihood of country instability or, more precisely, the conditions conducive to instability for every major country of the world over each of the next 15 years. To do so, we identify, evaluate, and ultimately forecast those macrostructural factors at the nation-state level that, when combined with events or triggers such as assassinations, riots, or natural disasters, have historically…been associated with different kinds and levels of intensity of conflict” (4).
The study uses KOSIMO data for the dependent variable.
The model is then constructed, the data validated, the results explored, and limitations acknowledged.
Labels: State Stability
Beck and King: Improving Quantitative Studies of International Conflict
Nathaniel Beck and Gary King, “Improving quantitative studies of international conflict: A conjecture,” American Political Science Review 94, no. 1 (2000): 21.
“We address a well-known but infrequently discussed problem in the quantitative study of international conflict: despite immense data collections, prestigious journals, and sophisticated analyses, empirical findings in the literature on international conflict are often unsatisfying. Many statistical results change from article to article and specification to specification. Accurate forecasts are nonexistent. In this article we offer a conjecture about one source of this problem: The causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but they are large, stable, and replicable wherever the ex ante probability of conflict is large” (21).
“In short, we conjecture that many quantitative international conflict studies lack robustness because they look only for the effects of variables averaged over all dyads, whereas in reality the effects vary enormously over dyads and are only substantively large for those already at relatively high risk of conflict” (22).
“According to our idea, international conflict data differ from other rare events data sets in two ways. The effect of any single explanatory variable changes markedly as a function of changes in the other explanatory variables…and the explanatory variables are, in principle, powerful enough to predict whether conflict occurs if the appropriate model is used” (23).
“…neural networks are sometimes treated as a black box for classifying very complex data patterns in the absence of theory…In contrast, we hypothesize that for international conflict data there are massive nonlinear interactive effects, and only the confluence of many causal factors leads to a nontrivial increase in the probability of war. This allows us to interpret the output of the model in a way that is useful for the international relations scholar, not simply as a black box that does a good job of classifying which observations are more or less likely to be conflictual” (27).
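A toy simulation can make the conjecture concrete. In the sketch below, with an invented data-generating process and hypothetical risk factors, a variable's marginal effect is essentially zero for low-risk dyads and large where the ex ante probability of conflict is high:

import numpy as np

# Toy illustration of Beck and King's conjecture. The data-generating
# process and the risk factors (contiguity, rivalry, arms_race) are
# invented for illustration.
rng = np.random.default_rng(0)
n = 100_000
contiguity = rng.integers(0, 2, n)
rivalry = rng.integers(0, 2, n)
arms_race = rng.integers(0, 2, n)

# Conflict probability is nontrivial only when all factors coincide.
p = 0.001 + 0.30 * (contiguity & rivalry & arms_race)
conflict = rng.random(n) < p

# Marginal "effect" of arms_race, conditional on the other variables:
for c, r in [(0, 0), (1, 1)]:
    mask = (contiguity == c) & (rivalry == r)
    eff = (conflict[mask & (arms_race == 1)].mean()
           - conflict[mask & (arms_race == 0)].mean())
    print(f"contiguity={c}, rivalry={r}: effect of arms_race = {eff:.3f}")
# ~0.000 for the low-risk stratum, ~0.300 for the high-risk one; an
# average over all dyads would wash this out, as the authors conjecture.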
Labels: State Stability
Saturday, July 11, 2009
Rost and Schneider: A Global Risk Assessment Model for Civil War Onsets
N Rost and G Schneider, “A Global Risk Assessment Model for Civil War Onsets.”
The authors create a model, using multivariate logit specifications, of the risk each country faces of civil war over the next five years. Their goal is to establish an effective early warning system that will allow IOs to more effectively allocate scarce resources to mitigate potential conflict eruptions.
Early warning systems typically use one of two types of data: "events data" or "standards-based data." Events data models typically draw on news feeds, focus on conflict in a specific region, and can predict the onset of conflict over a very short time horizon. Standards-based data are gathered over much longer periods and have an annual track record upon which to build.
The authors argue that, while state strength has been highlighted as a key determinant of state stability, one factor missing from the equation is the state's respect for basic human rights. Whatever the effect of state weakness on instability, a state that actively discriminates against part of its population is expected to exacerbate the problem.
The authors go through a variety of explanations for the onset of civil war, from political to economic to demographic.
They then construct their model and find that their human rights variable is a highly significant predictor of civil war onset, alongside a variety of other results: civil war risk decreases with economic development; oil exporters are at higher risk; political instability and mountainous terrain do not correlate with onset; population size has no significant effect; democracy correlates with civil war; and military regimes experience more civil war.
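A stylized version of such a model, using synthetic data and hypothetical variable names rather than the authors' actual dataset, might look like this:

import numpy as np
import statsmodels.api as sm

# Minimal sketch of a multivariate logit of civil war onset. All
# variables and data are synthetic stand-ins; the real model uses
# country-year panels with human rights, development, oil, etc.
rng = np.random.default_rng(1)
n = 2_000
repression = rng.normal(size=n)      # hypothetical human rights index
gdp_growth = rng.normal(size=n)
oil_exporter = rng.integers(0, 2, n)

# Synthetic onsets: repression and oil raise risk, growth lowers it.
linpred = -3.0 + 1.0 * repression - 0.5 * gdp_growth + 0.8 * oil_exporter
onset = rng.random(n) < 1 / (1 + np.exp(-linpred))

X = sm.add_constant(np.column_stack([repression, gdp_growth, oil_exporter]))
result = sm.Logit(onset.astype(int), X).fit(disp=False)
print(result.params)  # recovered coefficients, with signs as in the findings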
Labels: State Stability
Goldstone et al: A Global Forecasting Model of Political Instability
Goldstone, J.A., et al. 2005. A Global Forecasting Model of Political Instability. Washington, DC: Political Instability Task Force.
These authors develop a model that anticipates state failure two years in advance with roughly 80% accuracy. Importantly, this is classification accuracy: the model correctly distinguishes states that will fail from states that will not in about 80% of cases (see the sketch below). It is a simple model, with a particular kind of regime type emerging as the factor most highly correlated with failure.
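A toy calculation, with invented counts, shows why this distinction matters for rare events: a model that always predicts "no failure" scores high on overall accuracy while catching no failures at all.

# Invented counts for illustration only.
failures, non_failures = 100, 4_900          # hypothetical country-years

# A degenerate model that always predicts "no failure":
always_stable_accuracy = non_failures / (failures + non_failures)
print(f"always-stable model: {always_stable_accuracy:.0%} overall accuracy")
# -> 98%, despite identifying zero failures.

# The task force's claim is stronger: roughly 80% of failures AND 80%
# of non-failures are classified correctly.
correct = 0.80 * failures + 0.80 * non_failures
print(f"balanced 80% model: {correct:.0f} of 5,000 correct, "
      f"including {0.80 * failures:.0f} of the 100 failures")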
First, there is the operationalization of state failure. For these authors, it is defined fourfold: revolutionary wars (over 1,000 battle deaths in one year), ethnic wars (same threshold), adverse regime changes (a drop of six points in Polity score), or genocides/politicides (direct targeting of political parties or ethnic groups by governments, with no death threshold specified). The list was compiled with the help of regional experts. The case counts from 1955 to 2003: 62 revolutions, 74 ethnic wars, 111 adverse regime changes, and 40 genocides/politicides.
Their list of independent variables was also compiled with the help of area experts and totaled 75.
The broader goals of the Political Instability Task Force are both to understand the variables that cause instability and to create a model that can identify countries likely to become unstable.
As state failure is a rare event, the method used is a case-control design of the kind used to study rare diseases in large populations: each case with a positive outcome (a failure) is matched with controls with negative outcomes, and the two populations are compared. Independent variables are also measured two years prior to the onset year of the dependent variable.
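A minimal sketch of this design, with invented countries and a placeholder covariate lookup, might look as follows:

import random

# Case-control sketch: each failure onset is paired with control
# country-years from the same year, and covariates are taken from two
# years before onset. All data structures here are invented.
random.seed(0)

onsets = [("CountryA", 1994), ("CountryB", 2001)]          # failure cases
stable_pool = [("CountryC", 1994), ("CountryD", 1994),
               ("CountryE", 2001), ("CountryF", 2001)]     # candidate controls

def lagged_covariates(country, year, lag=2):
    """Placeholder lookup: real work would read infant mortality,
    regime type, etc. for the given country in year - lag."""
    return {"country": country, "measured_in": year - lag}

matched = []
for country, year in onsets:
    controls = random.sample([c for c in stable_pool if c[1] == year], k=2)
    matched.append({
        "case": lagged_covariates(country, year),
        "controls": [lagged_covariates(c, y) for c, y in controls],
    })

for pair in matched:
    print(pair)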
The group's central finding is that relatively simple models can identify cases of instability. Although myriad explanations for the causes of state failure have been offered, none holds across the board. Goldstone (2001) argues that “…the origins of a political crisis can best be understood by turning the problem on its head, asking what factors are necessary for a state to sustain stability despite the various problems - economic, political, social - it might encounter” (8).
The authors find that many variables traditionally highlighted as drivers of state instability miss the mark: rather than pointing to underlying weaknesses in the political system, they turn out to be effects of political instability. High inflation, a large youth bulge, and high birth rates are indicative of poor governance: “…in countries that are poorly governed, it is more likely that there will arise bouts of high inflation, or sharp economic reversals, or that people will rely more on family support…” (10). State failure is understood not by listing the challenges a state must face, but by assessing the state's resilience in facing them.
Two Polity variables produce very good results in combination: the degree to which political participation is factionalized, and the degree to which competition for central office is open. “The combination of a winner-take-all, parochial approach to politics with opportunities to compete for control of central state authority represents a powder keg for political crisis” (12).
“According to our research, most economic, demographic, geographic, and political variables do not have consistent and statistically significant effects on the risk of instability onset” (14).
“The model essentially has only four independent variables: regime type, infant mortality, a ‘bad neighborhood’ indicator flagging cases with four or more bordering states embroiled in armed civil or ethnic conflict, and the presence or absence of state-led discrimination” (15).
Labels: State Stability