Will We Still Call It War?

26 February 2015



When Army Chief of Staff General Ray Odierno says that today’s environment is the most uncertain of his 40 years in the Army, it’s easy to see why. Wars are now less about land than ideology. Robots can kill. A cold war with one enemy has given way to a world of myriad, interconnected conflicts in which the U.S. can call no one simply ally or enemy. Global warming has shifted the very nature of the environment upon which wars are fought.

Our increasingly complex conflict environment is part of what’s driving the contentious debate over the President’s proposed authorization for the use of military force (AUMF) against ISIS. How do we define our enemy, and the theaters of conflict, in a war that is metastasizing and changing every day? As Congress reviews the proposed authorization, it’s hard not to compare the present to the past – and to wonder what the future holds. At New America’s Future of War Conference this week, Odierno’s lament helped frame the conversation: if so much has changed in his 40 years of service, what can we expect in the next 40?

First, there’s the spread of new technologies – like the proliferation of drones, combined with America’s eroding edge in drone technology and robotics. According to New America’s new World of Drones project, 85 countries have some form of militarized drone, three countries have used drones in combat, and more have considered it.


Dr. Missy Cummings, an associate professor at Duke and former Navy pilot, said the United States military has “lost the edge” in the field. Today, the Israelis lead the world in drone development, Amazon and Google lead the world in robotics, and her students can 3D print a drone in a weekend, she said. Cummings even “guaranteed” that U.S. forces would be struck by a 3D printed drone in the future. As other countries and even companies surpass or challenge the United States in the development of key technologies, the American capability to manage crises may decline.

Another source of increasing uncertainty, according to Sharon Burke, former Assistant Secretary of Defense for Operational Energy Plans and Programs and a fellow in New America’s International Security Program, flows from shifts in the tectonic plates of conflict – such as climate change and resource scarcity. Vice Chief of Naval Operations Admiral Michelle Howard warned of the threats posed by climate change, saying it will be “a challenge for every nation” and reminding the audience that most of the world’s population lives along the coasts.

“It’s a holistic mess,” said Nadya Bliss, Director of the Global Security Initiative at Arizona State University.


The laws that govern these new types of conflict – or the lack thereof – are another source of uncertainty. Already the United States has fought a six-month war against ISIS without congressional authorization. Harold Koh, a former legal adviser at the State Department, warned that if Congress doesn’t pass an ISIS-specific authorization for the use of military force that supersedes the 2001 AUMF and includes sunset provisions, it will be known for passing a “21st Century Gulf of Tonkin resolution.”

Yet, America’s long war is only the tip of this legal iceberg. Rosa Brooks, a senior fellow at New America and professor at Georgetown Law, contended that the line between war and not war is blurring in part because of advancing technologies and tactics. If we get to a point where we can no longer tell the difference, that could fundamentally challenge the law of armed conflict, Brooks suggested.

How do we prepare for this new world of conflict? At least in part by building the space for discussion. As former Deputy Assistant Secretary of Defense for Plans Janine Davidson noted, learning to adapt will be essential. But former Secretary of Defense Donald Rumsfeld was right, she added, when he said that you go to war with the army you have, not the one you want – which is why it’s critical to institutionalize lessons beforehand.

Former Undersecretary of Defense for Policy Michèle Flournoy summed up one of the biggest lessons of the past few years: decision-making improves with a diverse set of opinions at the table. Fortunately, the field’s lack of diversity is changing. According to Senator McCain, there are more women on the Senate Armed Services Committee staff than ever before. In 2013, Secretary of Defense Leon Panetta repealed the ban on women serving in ground combat units. As one Army captain wrote in the Washington Post last year, allowing women to serve in front-line units isn’t just “an exercise in social equality” but also “a valuable enhancement of military effectiveness and national security.”

There is much more to do in establishing a diverse discussion space, and not merely along gender lines. As war becomes more complex and uncertain, we’ll need diverse perspectives and ideas more than ever.

About the Author

David Sterman
David Sterman is a research associate for New America's International Security Program and a graduate of Georgetown’s Center for Security Studies.

Is the Cold War Over Encryption at a Boiling Point?


Since Edward Snowden blew the lid off of the National Security Agency’s broad range of bulk surveillance and hacking programs—including the NSA’s secretly tapping directly into Yahoo and Google’s private data links, and its use of a vast catalog of security vulnerabilities in a range of U.S. tech companies’ hardware and software products—relations between the feds on the East Coast and techies on the West Coast have been downright chilly. From the perspective of many in the American tech industry, the NSA’s actions represent an “Advanced Persistent Threat” similar to the cyber-threats posed by organized crime or Chinese intelligence, while also threatening their bottom line by undermining worldwide consumer trust in the security of American companies’ products.

The relationship between the feds and techies got even chillier over the winter, when the FBI director and the U.S. attorney general criticized Apple and Google for securing the data on iPhone and Android smartphones with strong encryption that only the phone’s owner could bypass, and when President Obama seemed to agree with U.K. Prime Minister David Cameron that tech companies should build surveillance backdoors for the government into their products.

The relationship practically iced over in the past week as not one but two bombshell stories broke about how the NSA is undermining the security of our computers and cell phones: first, the story that the NSA has figured out how to hide spyware in the firmware of a wide variety of brands of computer hard drives, so that the infection persists even when the hard drive is completely wiped and the operating system is reinstalled; second, the story that the NSA had supported the U.K.’s signals intelligence agency, GCHQ, in breaking into the servers of SIM card manufacturer Gemalto and stealing millions of encryption keys, enabling mass cellphone surveillance.


That icy conflict turned hot this Monday at a cybersecurity conference hosted by New America (where I work for the Open Technology Institute) to launch its new Cybersecurity Initiative. There, the director of the NSA was confronted by the head of security at Yahoo, who had a simple question: If the federal government cares so much about cybersecurity, why does it want us to make our products less secure?


A transcript of the question-and-answer exchange between Yahoo Chief Information Security Officer Alex Stamos and Adm. Mike Rogers, director of the NSA and U.S. Cyber Command, is available here. But it basically boiled down to this: Stamos wanted to know why Rogers agreed with FBI Director James Comey that companies should build backdoors into their encrypted products to facilitate government surveillance, when all the technical experts say that cannot be done without opening users up to threats other than the government. In response, Rogers quibbled with the use of the term “backdoor” just as Comey has—“We aren’t seeking a back-door approach,” Comey said in an earlier speech on the topic; “We want to use the front door”—and stated his belief that it was “technically feasible” that surveillance capability could be built into products without otherwise compromising security, so long as we put in place an appropriate legal framework to guide its use.

However, as noted security expert Bruce Schneier put it later in the conference during his own keynote conversation: “It’s not the legal framework that’s hard, it’s the technical framework.” Put another way, as Schneier has blogged before, “there’s no technical difference between a ‘front door’ and a ‘back door’,” only a semantic difference, and whatever you call it, it will undermine security overall. Stamos likened the introduction of backdoors into encrypted products to “drilling a hole in the windshield”—by trying to provide a narrow entry point just for the government, you end up undermining the overall integrity of the encryption shield. Indeed, as Stamos pointed out in his exchange with the NSA director, “all of the best public cryptographers in the world would agree that you can’t really build backdoors in crypto”—a fact that can be verified by looking at this extensive bibliography of all of the writing on the subject that’s been published since the Apple/Google crypto debate first flared up last year. When Rogers replied that he had a lot of “world-class cryptographers” at the NSA, Stamos indicated that he had talked to some of them too and they agreed with his position. Echoing Stamos, ACLU technologist Chris Soghoian tweeted his expectation that there would be “facepalms” back at NSA HQ by mathematicians embarrassed by their director’s statements.
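Schneier’s point can be made concrete with a small sketch. The following toy Python example (illustrative only – the XOR “cipher,” the key names, and the escrow scheme are all hypothetical, not drawn from any real proposal or product) shows why a “front door” is structurally identical to a back door: wrapping the session key for an escrow holder gives that holder exactly the same decryption capability as the legitimate user.

```python
import hashlib
import secrets

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed via SHA-256 (for illustration only,
    NOT secure for real use). XOR is its own inverse, so the same
    function both encrypts and decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt_with_escrow(message: bytes, user_key: bytes, escrow_key: bytes):
    """Encrypt under a random session key, then wrap that session key
    twice: once for the user, once for the escrow holder. The two wraps
    are structurally identical -- there is no 'front' or 'back' here."""
    session_key = secrets.token_bytes(32)
    return {
        "ciphertext": keystream_encrypt(session_key, message),
        "wrapped_for_user": keystream_encrypt(user_key, session_key),
        "wrapped_for_escrow": keystream_encrypt(escrow_key, session_key),
    }

def decrypt(package, key, wrap_field):
    # Unwrap the session key with whichever key you hold, then decrypt.
    session_key = keystream_encrypt(key, package[wrap_field])
    return keystream_encrypt(session_key, package["ciphertext"])

user_key = secrets.token_bytes(32)
escrow_key = secrets.token_bytes(32)
pkg = encrypt_with_escrow(b"attack at dawn", user_key, escrow_key)

# The escrow holder recovers the plaintext exactly as the user does:
assert decrypt(pkg, user_key, "wrapped_for_user") == b"attack at dawn"
assert decrypt(pkg, escrow_key, "wrapped_for_escrow") == b"attack at dawn"
```

Whatever the access point is called, the escrow wrap is simply a second copy of the key: the security of every message now also depends on the secrecy of the escrow key, which is precisely the added attack surface the cryptographers are warning about.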


By joining with the FBI director and the attorney general in condemning encryption that doesn’t allow for government snooping, Rogers on Monday increased the chances that the cold war between the feds and the techies is about to get hot. However, President Obama himself offered a much more nuanced position just a couple of weeks ago while visiting the West Coast for the White House’s Cybersecurity Summit at Stanford. In an interview after that summit—where Apple CEO Tim Cook argued that he and others in his industry had a responsibility “to do everything in our power to protect the right to privacy”—the president offered an olive branch on the encryption issue and backed away from the stronger statements of his law enforcement and intelligence officials, saying that he was “a strong believer in strong encryption,” that “there’s no scenario in which we don’t want really strong encryption.” And although he recognized that such technology may pose challenges to law enforcement and that “we’re really gonna have to have a public debate” about how to address those challenges, he suggested that “I lean probably further in the direction of strong encryption than some do inside of law enforcement.”


The techies on the West Coast should be heartened by the president’s comments, even if they would have preferred an even stronger statement in favor of encryption, and even if they were ultimately unimpressed by the president’s call at Stanford for more cooperation between government and industry on cybersecurity. (“Why are people going to want to share with a government that’s weaponizing our technologies?” asked one commentator.) But when the president, or the NSA director, or anyone else in government calls for a public debate on the issue, they should be reminded: we already had this debate twenty years ago, in the so-called “Crypto Wars” of the ’90s. When faced with the choice between strong encryption and government backdoors, policymakers ultimately chose strong encryption, recognizing that it was the cornerstone of information security and therefore also a cornerstone of the information economy and American competitiveness in the global tech marketplace.

Today’s policymakers should learn from that history and follow the advice of the Review Group appointed by the president to examine the NSA’s programs: The U.S. government must support, rather than undermine, the use of strong encryption. Following that advice would help mend the fences between the feds and the techies and better ensure that government and industry can work together to address the serious cybersecurity threats that we all face. However, if we fail to heed the lessons of the Crypto Wars, we will be doomed to repeat them, and in a war between the tech industry and the federal government, everyone’s security will suffer.

This article originally appeared on Future Tense, a collaboration among Arizona State University, New America, and Slate.

About the Author

Kevin Bankston
Kevin Bankston is Policy Director of the Open Technology Institute at New America. He has spent his career advocating for digital rights, formerly serving as Director of the Center for Democracy & Technology's (CDT) Free Expression Project, and Senior Staff Attorney at the Electronic Frontier Foundation (EFF).

Technology for the People, By the People

New America

If you use a smartphone or your kid has a tablet, you should already be wondering how we can educate, engage, and retain diverse talent in tech. But experts say you should also be curious to know why Ida B. Wells is Aliya Rahman’s favorite data scientist. On this episode, Anne-Marie Slaughter talks with Rahman, program director of Code for Progress, as well as Megan Smith, Chief Technology Officer at the White House, and Jessica Rosenworcel of the FCC, about how leadership can make an impact in tech. Their conversation is excerpted from a discussion at a recent event at New America hosted by the Open Technology Institute, “Technology for the People, By the People.”

The Environment of Social Justice


This story is part of a series, From Moment to Movement: Conversations on Race in America, produced by New America in collaboration with Howard University.

What’s one of the biggest security challenges in the black community? If you’ve been following the news, you probably think it’s related to bias-based policing or criminal justice.

Here’s a challenge you may not have heard about: the dearth of black environmental leadership. African Americans comprise 12 percent of the nation’s population but all of the communities of color combined (i.e., 38 percent of society) hold only 12 percent of the leadership positions in environmental organizations – both inside and outside the government. That’s not just a problem for blacks, but for our entire society.

It’s easy to see why the disparity is a big problem for the black community: perhaps as a consequence of this lack of leadership, there’s a significantly higher concentration of environmental hazards and degradation in black communities, from more toxics-releasing facilities and air pollution in general to more brownfields (real estate that has been contaminated by a pollutant of some kind, and cannot be reused until it has been completely remediated). This pollution in black communities drives up morbidity, stress, and mortality statistics while driving down neighborhood economic investment, political clout, social capital, school performance, and community pride.


But why, exactly, is this a problem for those of us who are not black, or do not live in a predominantly black neighborhood? Because environmental issues metastasize, and because we can’t expect to innovate solutions without bringing to bear a diversity of perspectives, particularly from people who have vastly different lived experiences. African Americans who grew up in certain communities may have a deeper understanding of the real-life impacts of environmental degradation, allowing them to craft better policies for all of us. Law professors Lani Guinier of Harvard and Gerald Torres of Cornell, using the metaphor of canaries in mines, describe what happens to blacks in America as a portent of what is likely to happen to middle- and working-class whites. Environmental degradation and attendant health impacts in black communities through concentrated pollution are likely on their way to other communities through climate change. Black communities may suffer earlier and more intensely from the ills of social inequality, but those ills eventually manifest in other communities that perhaps thought themselves immune. There is no substitute for African Americans’ perspectives on environmental policies, informed by both their lived experience and their grasp and application of scientific knowledge.

We’ve seen some modest policy wins under the Obama administration. Americans’ health will likely improve as a result of its requirement to reduce mercury emissions and other poisons from some 1,400 fossil-fuel power stations. And in 2013 the administration tightened restrictions on carbon dioxide emissions, resulting in a reduction of over 35%. What’s more, new coal facilities can be built only if they are able to capture and store on the order of 30% of their emissions. But elections have consequences; these environmental policy gains can be reversed in the future by politicians supported by relentless fossil fuel concerns. Americans need a healthy dose of rough-and-tumble politics around energy and the environment.


Some of the worst environmental offenders remain at large: industrial polluters who, like other “corporate citizens,” want to be regulated and taxed substantially less while being subsidized significantly more. Corporations seek to appropriate nature for private gain while the costs of environmental abuse are shared among all of us. As Oil Change International notes, the public bears the externalized costs of the fossil fuel industries – military, climate, local environmental, and health costs – to the tune of at least $360 billion and upwards of $1 trillion annually.

And the fossil fuel sector is determined to delay the point when renewables become cost-competitive with fossil fuel-based electricity. One strategy used by fossil fuel advocates is to remove statutes requiring states and municipalities to have a specified share of their electricity supplied by renewable sources. For example, Ohio-based energy companies such as American Electric Power and FirstEnergy, together with fossil fuel-backed entities like the American Legislative Exchange Council (ALEC) and Americans for Prosperity, successfully helped produce a bill (SB 310) that Ohio Governor Kasich signed into law in June of last year. The bill freezes for two years the annual increases in the share of renewables in the state’s electricity supply and in its energy-efficiency targets. Derivatives of this strategy are taking root in many of the remaining 29 states with such statutes to increase renewable energy usage.


If we don’t elect or appoint more leaders who have lived the impact of such policies, we’re putting our collective future at risk. But the next question is: Where are the African American environmental leaders? Is this a bias problem – in other words, qualified African American candidates not being promoted or elected – or a pipeline problem – not enough African Americans engaged in the field?

It’s likely related to both factors, but the latter is somewhat easier to solve systemically.

There are two institutional approaches to engaging more blacks on the issues of climate, energy, the environment, and social justice: through historically black colleges and universities (HBCUs) and through Africana Studies programs at other universities. Only about a quarter of the 100 HBCUs have some type of environmental studies program. And although only 9 percent of black college students matriculate at HBCUs, these institutions have a unique platform to focus campus-wide attention on environmental matters, in part because of their smaller size. Just as all HBCU students learn about the black experience while studying various subjects, they can all also learn how environmental issues pose the most significant challenges and present the biggest opportunities of the 21st century.


As for Africana or African American Studies, there are more than 250 such programs around the country, primarily at predominantly white institutions (PWIs). PWIs are also home to the nation’s largest and best-resourced environmental academic programs. Essentially all research-intensive and top liberal arts PWIs have at least one environmental studies program. Generally, black students do not enroll in environmental studies courses on PWI campuses, but they do tend to enroll in some number of Africana Studies courses.

At present, less than five percent of Africana Studies academic units and professors identify the environment, climate change, or energy as their prioritized area of research and engagement.  Similarly, less than five percent of Africana Studies courses take up these subjects.

My suggestion to Africana Studies professors: use your classes and curricula as a platform to discuss issues of climate, energy, and the environment – all intimately related to the topic of social justice, which already runs through many Africana Studies lectures. And for HBCUs: create and strengthen environmental degree programs and staff them with leading academics in the field. After all, today’s black students are tomorrow’s black intellectuals and engaged citizens. If more of them become leaders in the epic struggle to steer society toward sustainability, we’ll all be better off.

About the Author

Rubin Patterson
Rubin Patterson is Professor and Chair of Sociology & Anthropology at Howard University and a Research Associate in the Department of Sociology at the University of the Witwatersrand in Johannesburg, South Africa. His new book is titled Greening Africana Studies: Linking Environmental Studies with Transforming Black Experiences.

Admiral Rogers Visits a New Neighborhood


The dress code said it all.

When Admiral Michael S. Rogers, Director of the National Security Agency, Cyber Command Commander, and recipient of the Navy Distinguished Service Medal recently walked into a cybersecurity conference, his uniform bore twenty ribbons and four badges from his esteemed Navy career. Rogers’ hair was neat and precise, in full compliance with Navy regulation on grooming standards for personal appearance.

His keynote followed the first panel, whose members wore jeans, cardigans, and button-down shirts with rolled-up sleeves. Kevin Bankston, Policy Director of the Open Technology Institute, quipped that the “answer to cybersecurity is letting people wear hoodies in DC.” With a long ponytail and full grey beard, Bruce Schneier—known as one of the world’s best cryptographers—was, for instance, decidedly not in compliance with Navy uniform regulations.

These two cultures have to come together to reconcile the twin goals of keeping us safe from online threats while preserving our liberties online. Bringing the uniforms and the hoodies into one conversation was both a reflection of the challenge and a call for solutions.


First, the challenge. Both Schneier and Alex Stamos, chief information security officer for Yahoo, sparred with Rogers over the issue of building backdoors into source code that would grant the government access to information that technology companies have been resistant to share. Apple and Google recently announced that their software would encrypt all data so the government can’t access it, even if the companies were presented with a warrant.

“It sounds like you agree with (FBI) Director Comey that we should be building defects into the encryption in our products,” said Stamos, who compared backdoors to “drilling a hole in the windshield,” which would leave code dangerously vulnerable to malicious hacking. You can read more about the exchange here.

“That would be your characterization,” Rogers shot back.

If this sounds like an adversarial exchange, that’s because it was – emblematic of the tension between the government (uniforms) and technology companies (hoodies). When the White House hosted a cybersecurity summit this month at Stanford University, for example, Apple CEO Tim Cook gave a blistering defense of user privacy. “History has shown us that sacrificing our right to privacy can have dire consequences,” Cook said.

This conference served as the launch of a new Cybersecurity Initiative, which combines expertise from multiple fields to address the challenges posed by cyber threats. One point of agreement among many of the technologists, coders, and tech executives—not to mention audience members—was that the threats faced by the NSA and tech companies don’t exist in a vacuum. They are shared, interdependent, and come from both state and non-state actors.

And, the next step may be not to just get the uniforms and hoodies into one room, but to bring more technologists into policy work.


“If you look at the leadership positions of cybersecurity, they are not the people who have actually done this,” said Cheri McGuire, Vice President for Global Government Affairs & Cybersecurity Policy at Symantec. “It would be nice to see more technologists do more of these (policy) things.”

And in addition to diversity in who creates policy, there needs to be greater diversity in who designs products, Tara Whalen, a privacy analyst at Google, pointed out. Low-income communities are more likely to use products that are not designed for their specific needs. You have to “design with people and not just for people. You want to hear from the users and not just the people who feel they know what the users want,” she said.

Because cybersecurity represents an increasingly urgent threat to governments and businesses alike, new capabilities are required to meet it. Assistant Attorney General John Carlin announced during his panel that he would consider indicting those who assist with ISIS social media efforts. “It is a new way to propagandize and reach individuals in a very targeted fashion in their home, and the ability to produce the slick propaganda is cheap and widely available,” Carlin said. “It presents a new threat.”


The phenomenon of ISIS using social media to recruit foreign fighters also illustrates a broader problem with the Internet. While the Internet has democratized information, kept us informed, and made us laugh, it has also served to silo us off from each other. We can surround ourselves in an echo chamber where only our own opinions are heard. So if the adversarial exchanges between Rogers and the technologists served a greater purpose, it was simply to confront and engage the other side.

“Rarely do we have a conversation on cybersecurity that engages in tough questions,” reflected New America president and CEO Anne-Marie Slaughter. Yet throughout the conference, it was apparent that many of these questions needed answers. Is a cyber-attack a declaration of war? What are the norms around cyber-attacks? How should a state respond?


Rogers compared the current cybersecurity landscape to the first 10 years of the debate over nuclear deterrence. In his view, we are living in a kind of wild west, where the ideas that will shape our strategic thinking are still being formed. He suggested that the answers to these questions might come from the academic community, much as nuclear deterrence theory came from Henry Kissinger and Thomas Schelling. “There is a place in the academic world for this kind of discussion [when it comes to cyber],” he said.

Yet the analogy between cybersecurity and nuclear deterrence might not be so straightforward. For example, in his exchange with Rogers, Schneier argued that a technological approach to encryption was more important than a legal approach. Rogers disagreed, contending that the legal approach mattered more. This rift is problematic because the way each side frames the issue leads it to different policy conclusions.

It seems the answer to all of these questions – and the way to improve usability and cybersecurity in particular – may be an embrace of diverse thinking. The challenges the Internet presents will not be straightforward, but asymmetric. We will not know when or where the next cyber attack will take place, or in which dorm room the next billion-dollar company will be launched. Every computer is a tool capable of good or bad.

Unless it’s a Seth Rogen movie. Then it’s a tool for evil.

About the Author

Justin Lynch
Justin Lynch is the Social Media Coordinator at New America.

How The Arab Spring Exposed the Limits of American Power


When it comes to revolutions, timing may be everything. The Middle East has now endured four years of uprisings with no peaceful end in sight, in part because it had the historical misfortune of entering a revolutionary moment just as the U.S.— and the wider Western world— was mired in its own profound period of political and economic dysfunction.

This isn’t just a historical lesson. As Congress debates whether to give President Obama a new AUMF  (Authorization for Use of Military Force), and as the 2016 presidential hopefuls propose their solutions to America’s foreign policy challenges, we need to determine what needs to be done differently, now that our economy is on a growth path, even if our politics remain dysfunctional.

When long-time dictator Zine El Abidine Ben Ali fled Tunis on January 14, 2011, the United States was in its ninth year of post-9/11 warfare. The country was still slogging through the Great Recession. Having elected a President who pledged to draw down the conflicts in Iraq and Afghanistan and focus on “nation-building at home,” the American public had little appetite for renewed engagement in the Middle East. The Obama Administration, moreover, had emphasized the need to “pivot” to the country’s long-term strategic and economic interests in the Pacific.


As uprisings spread in quick succession— to Egypt, Bahrain, Yemen, Libya, Syria, and beyond— Washington was also in its first months of divided government. A new Republican majority in the House promised to constrain the powers and purse of the Obama presidency. Foreign assistance, while comprising only approximately one percent of the federal budget, was an appealing target. Meanwhile, the machinery of American foreign policy— the Pentagon, the State Department, USAID, etc.— had long since become ossified by bureaucratic decay. Strategic thinking and implementation were overly reliant on the military, inflexible, and ill-prepared for such a massive upheaval.

More: Here is the power of American angst.

Europe was likewise preoccupied with economic crisis. While the Middle East simmered, Greece, Portugal, Cyprus, and Ireland needed urgent assistance. The wider Eurozone teetered at the economic abyss, as Brussels and Berlin struggled to cope with the fallout. Japan, meanwhile, was soon consumed by the triple disaster— earthquake, tsunami, and nuclear crisis— of March 11, 2011.

In this context, the G8’s initial response to the Arab Spring— the Deauville Partnership announced at their summit in France in May 2011— unfortunately ended up being as much an exercise in obscuring the G8’s collective lack of resources for the “transitioning countries” as the outpouring of support it was branded as and hoped to be.

Yet, the G8’s response was merely another manifestation of a wider crisis of confidence in the West, and indeed, in the entire international system: Were the BRICs on the verge of becoming the world’s dominant economic bloc? Was the Chinese model of authoritarian capitalism more capable of growth and immune to downturns? Or were these emerging economies themselves waiting for their own crises?

As the Arab Spring erupted, there was accordingly no obvious model for the region to follow— and there were severe constraints on the ability of Washington and the West to influence its course.

Four years later, we see the results: The Arab Spring to date has been a quagmire of epic proportions— or, more accurately, a series of epic quagmires. To be sure, even if the U.S. government had the “perfect” policies, its ability to influence the outcome would have been limited. The dominant causes and course of events have been set by conditions in the region: autocratic governments that failed to provide a modicum of freedom or opportunity for their citizens; the growing divide between Islamists and traditional authoritarian regimes; the sectarian divide between Sunni and Shia; and the region’s failing economic, educational, and social systems— embodied foremost in the systematic marginalization of women, young people, and minorities.

Had the Arab Spring occurred in a different period, the U.S. government might have been in a better position to assist— as it did with the Marshall Plan following the Second World War or the SEED Act following 1989. The Mideast, similarly, might have seen the West as more of a magnet— perhaps akin to post-Cold War Latin America, where distrust of the West lingered even as the region increasingly adopted western norms.

Related: Will the radical center save the United States?

From the perspective of the Middle East, the western political model has instead seemed weak while the eastern model has remained distant. The Islamist model has been discredited— just as the secular authoritarian model before it. The region has splintered. Violence has surged. And the underlying challenges— the widespread lack of human dignity, political freedom, and economic growth— remain.

Can the region now be turned around?

In some ways, the ingredients for sustainable change remain. A new generation— more than half of the Arab world is under the age of 25— is both awakened to the region’s challenges and connected to the wider world in ways not seen, perhaps, since the apex of Arab civilization. Across the region, the Arab Spring brought to the fore a series of challenges that, while unresolved, are now firmly on the agenda in many countries.

While the region must rightly retain control over its own fate, there is more that Washington can do to help – and more that it must learn from recent mistakes as it ramps up its war against ISIS.

In the initial stages of the Arab Spring, the Obama Administration was rightly respectful of the fact that change was being driven from within the region. But President Obama has too often appeared overly cautious or indecisive. Congress is also blameworthy for repeatedly displaying a penny-wise, pound-foolish, and needlessly partisan approach to foreign policy.

American economic strategy in the aftermath of the Arab Spring serves as a cautionary tale. The United States initially sought to help stabilize countries like Egypt, Jordan, and Tunisia through increased economic aid. This came in response to calls from the countries themselves; Egypt, in particular, called for debt forgiveness. In a highly resource-constrained environment, the Obama Administration attempted to meet these requests. But Congress endlessly delayed— long before Islamist parties came to power— even when it involved only the reallocation of existing funds.

Meanwhile, we relied on funding from others, like the Gulf States, each of whom had their own interests. And so conditions on the ground deteriorated, American credibility weakened, and the moment was lost.

Of course, U.S. taxpayers cannot possibly cover the cost of global crises alone. But when it comes to diplomacy, there is a basic truth: without our own resources on the table, our influence diminishes.

In today’s Washington, there is often money for military action. But advancing American interests through aid (once an accepted, bipartisan strategy) has become an “affront” to fiscal responsibility— even if it is often far more cost-effective than putting boots on the ground.

America’s military might, meanwhile, has proven difficult to utilize in a sustained manner.

America’s military might, meanwhile, has proven difficult to utilize in a sustained manner. In Libya, the American-led air campaign against Qaddafi was successful. Ensuring stability has proved far more difficult. Libya has turned into an example of the powerlessness of American power; our military can only go so far in addressing another nation’s underlying challenges.

Syria, now an ISIS stronghold, has become the perfect storm of the Arab Spring— combining (and indeed exceeding) the worst aspects of each uprising: the kleptocracy of Egypt, the sectarianism of Bahrain, the chaos of Libya, the poverty of Yemen, all rolled into a brutal proxy war involving major regional and global powers.

Its fate, likewise, is a microcosm of the Mideast’s troubles. When Syria eventually emerges from its nightmare— a prospect for which there is no obvious path— the country will face a vast array of challenges: establishing security and a functioning government; ending sectarian warfare; rebuilding the country’s infrastructure; and addressing the underlying crises that caused the revolution in the first place. In short, Syria needs to rebuild a functioning state and society almost from scratch. This will be a long, difficult, and Syrian-led process. But it’s essential that the U.S. remain engaged to safeguard its interests.

Yet American policy on Syria remains predominantly disengaged, perhaps understandably so. Syria is a problem from hell. President Obama’s decision to keep it at arm’s length has been a reasonable strategic choice. The U.S., moreover, has done its best to mitigate the refugee crisis and support various attempts at a diplomatic solution. It also achieved a victory in the elimination of chemical weapon stockpiles held by the Assad regime.

Nevertheless, the cumulative effect of reasonable reluctance in Syria— as across so much of the region— has appeared more as lack of strategy than strategic choice.

While the U.S. government will never alone be able to solve the problems of the Middle East, in a region where both American credibility— and the entire democratic model— are increasingly questioned, our policies must demonstrate renewed competence and coherence.

You can’t ask for a do-over when it comes to global events, but you can build toward a reset by taking firm and confident steps, such as the new Defense Secretary Ash Carter’s move to convene a hands-on strategy session on the war against ISIS.

More importantly, as Congress debates the Authorization for Use of Military Force against ISIS, it must use this opportunity to work with the Administration in Syria and beyond. It’s time to expand our strategy across the entire Mideast to encompass all facets of our national power: military, diplomatic, economic, and inspirational. It’s time to turn a reactive series of tactics into a strategy that flows from our renewed strength as a superpower.

About the Author

Ari Ratner
Ari Ratner is a fellow at the New America Foundation. From 2009-2012, he served as an appointee at the State Department. Follow him on Twitter at @amratner.

Beware the Big Data Gospel


Earlier this month, I published an article on CNN.com that examined and described the limits of big data as an instrument of progress. I won’t rehash the arguments of that article here (I do hope you’ll read it), but I want to respond to two critiques of the piece from Marco Lübbecke, a professor of operations research, on CNN.com and Thomas Davenport, author of Big Data at Work, on a Wall Street Journal blog.

Lübbecke’s counter-claim that “big data saves lives” is emotionally manipulative and unsupported by any evidence he puts forth. The antithesis Davenport sets up between “data and analytics” on the one hand and “unaided human intuition” on the other is a dangerously misleading simplification that wrongly conflates intuitive knowledge with arbitrary subjectivity. Davenport also fails to account for the fact that data compilation and analysis is performed by human beings and is therefore neither automatic nor objective. As philosopher Michael Polanyi thoughtfully characterized tacit knowledge, “we can know more than we can tell.” Polanyi’s point remains true no matter how many zettabytes of storage and petaflops of processing power one has at one’s disposal.

Quantitative analysis of data has been central to what we’ve come to call science since well before the word “science” existed.

Quantitative analysis of data has been central to what we’ve come to call science since well before the word “science” existed. Babylonian astronomers gathered data about solar eclipses 2600 years ago, and used that data to predict future eclipses. As science and technology have evolved over the millennia, so too have tools for gathering and analyzing data. Many of those tools are invaluable to the scientific endeavor. But adherents to the church of big data, like Lübbecke and Davenport, fail to see what legal scholar Julie Cohen points out: “Big Data is the ultimate expression of a mode of rationality that equates information with truth and more information with more truth, and that denies the possibility that information processing designed simply to identify ‘patterns’ might be systematically infused with a particular ideology.” My aim is to interrogate that ideology, not to deny that the creation and analysis of quantitative data is a necessary part of science.

More: Here are the dangers of big data.

A close reading of the examples Lübbecke puts forth to illustrate the life-saving potential of Big Data reveals the hollowness of Davenport’s claim that “at the core of analytical decision-making is not soft fad, but hard science.” Lübbecke cites polio vaccination campaigns as an “outstanding example of the way that Big Data saves lives.” His evidence is a paper co-authored by scientists at the Centers for Disease Control (CDC), but his reading conflates the very real life-saving power of vaccines with the supposed life-saving power of analytical techniques for estimating how effective vaccines are. The relevant question is the value of the analytical techniques that determine how vaccines ought to be applied, not the value of the vaccines themselves.

The CDC paper concludes that “sustained intense immunization efforts” are better than “wavering commitments” to immunization. I don’t doubt this is true enough. Common sense would dictate that sustained efforts are better than wavering commitments. But what value does data-driven analysis add to this proposition? Analytic techniques, Lübbecke points out, yield the claim that the Global Polio Eradication Initiative (GPEI) has saved and will save between $40 and $50 billion from 1988 to 2035, and that Vitamin A delivered along with polio vaccines accounts for further savings of between $17 billion and $90 billion.

Do the numbers $17-90 billion tell us anything that the words “lots of money” do not? The CDC journal article goes on to quote the director of Rotary International’s anti-polio campaign: “We regularly use the $40–50 billion estimate of net benefits of the GPEI as we raise funds to finish polio eradication.” This goes to the point I was trying to make in the original piece. It’s not that anyone should really have confidence that the polio eradication campaign saved $45 billion +/- $5 billion. It’s that saying so is an effective fundraising technique. Pretending that a range of $17-90 billion conveys more information than “a lot of money” is where an uncritical acceptance of the virtue of data goes off the rails.

Lübbecke is correct in his generic call for careful analysis; but he doesn’t follow through on his own prescription.

It’s simply a category mistake to attempt to come up with a specific number for the economic impact of polio eradication. It is not as if there is some accurate figure, say $47,253,238,334, which more sophisticated methodology will allow us to pin down. No such number exists, and all the economists in all the business schools can’t reliably find it. A world in which fewer people die of polio is a different world and, I would argue, a better one. The true case for vaccination is a moral one that rests on lives saved and people spared the ravages of polio, not on a dollar figure of benefit to the economy.

However, the polio example isn’t as laughable as Lübbecke’s other example purporting to demonstrate the life-saving benefits of big data: a blog post by Edward Kaplan of Yale that discusses a business-school study of the number of “counterterror agents” the US needs. Lübbecke endorses the model that Kaplan uses, in which the “number of counterterror agents drives the rate with which [terrorist] plots are detected.” But Kaplan’s model is ludicrously oversimplified. He doesn’t clearly define who “counterterror agents” are. Do police officers, DEA agents, and bureaucrats with the Department of Homeland Security count? Do customs agents? Do US Marshals?

More: Here is the power of American angst.

Is the probability that a terrorist plot is uncovered really simply a function of the number of agents, as in Kaplan’s model, and not of factors like the agents’ intelligence, legal constraints and technological tools? Kaplan and Lübbecke highlight that though the “model suggests an optimal staffing level of only 2,080 agents,” in 2004 the FBI had 2,398 agents “dedicated to counterterrorism”. (Lübbecke incorrectly states that 2004 is the most recent year for which FBI staffing figures are publicly available, though a quick search finds this Department of Justice report, which gives a number of 3,445 FBI agents “addressing counterterrorism matters” in 2009.) In any case, the juxtaposition of the number Kaplan’s model spits out with the number of FBI counterterrorism agents in 2004 is hardly, as Kaplan characterizes it, “interesting,” let alone of tangible life-saving benefit, as Lübbecke claims.
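The single-variable structure the essay objects to is easy to make concrete. The sketch below is emphatically not Kaplan's actual model; the detection curve and every cost figure are invented assumptions. It simply shows that once detection is assumed to depend on headcount alone, an "optimal" staffing number falls mechanically out of whichever parameters one plugs in.

```python
import math

# A deliberately toy cost-minimization model. Every number below is an
# invented assumption for illustration; these are NOT Kaplan's figures.
AGENT_COST = 150_000      # assumed annual cost per counterterror agent
ATTACK_COST = 5e9         # assumed cost of a single undetected plot
PLOTS_PER_YEAR = 3        # assumed rate of plots
DETECTION_K = 0.004       # assumed per-agent detection effectiveness

def expected_annual_cost(n_agents: int) -> float:
    """Expected cost when detection depends only on headcount, which is
    the single-variable simplification criticized above."""
    p_detect = 1 - math.exp(-DETECTION_K * n_agents)
    expected_missed = PLOTS_PER_YEAR * (1 - p_detect)
    return AGENT_COST * n_agents + ATTACK_COST * expected_missed

# The "optimal" staffing level is just the minimizer of this function.
best_staffing = min(range(10_000), key=expected_annual_cost)
print(best_staffing)
```

Nudge any of the assumed constants and a materially different "optimum" emerges, which is precisely why a headcount-only model makes a shaky basis for staffing decisions.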

The paradox at the heart of the argument Lübbecke and other cheerleaders for big data make is that they claim to place great value on evidence as opposed to intuition.

The paradox at the heart of the argument Lübbecke and other cheerleaders for big data make is that they claim to place great value on evidence as opposed to intuition. But rather than present analytical evidence for the value of “evidence,” they merely assert that it is tremendously useful and expect us to believe them.

Lübbecke’s examples point to the silliness of big data’s claim to epistemic superiority, but they don’t adequately illustrate the damage that can be done by big data evangelists like Davenport. To understand that damage, one must parse the political economy of data creation and analysis, something I began to do in the earlier piece. In short, using data along the lines Davenport advocates imposes costs on society unequally. As my colleague Seeta Gangadharan has written, “There’s a real threat that the negative effects of algorithmic decision-making will disproportionately burden the poorest and most marginalized among us.”

These are not new fights. Steven Shapin, an historian of science, was writing about the 17th century when he remarked, “it is just when the authority of long-established institutions erodes that the solutions to such questions about knowledge come to have special point and urgency…Method, broadly construed, is the preferred remedy for problems of intellectual disorder.” Blind faith in the superiority of “big data” or “well-designed analytics” does not resolve underlying intellectual discord about how society ought to guard itself against terrorism or structure its economy.

Lübbecke and Davenport seek objective certainty where it is not attainable. They do not seriously wrestle with the limitations of data-driven analysis but merely make a fetish of it.

About the Author

Konstantin Kakaes
Konstantin Kakaes is a program fellow with the International Security Program at New America.

Income Based Repayment Plans: Comforting but Not a Cure for Student Debt


There was a time when conventional wisdom said that student debt is not a problem in and of itself—rather, “high” debt of $100,000 or more is the more pressing concern. A recent report from the Federal Reserve Bank of New York highlights just how out of touch that view is. A staggering percentage of Americans do not repay their student debt, no matter how big or small the balance.

Analysis reveals that 34 percent of students with just $5,000 of outstanding debt—hardly “high”—default on their student loans. Student debt imperils far more than just individual borrowers’ monthly budgets. It erodes higher education’s ability to deliver on the promise that those who have similar abilities and work equally hard will achieve similar outcomes. Unfortunately, the prevailing policy response—Income-Based Repayment (IBR) plans—does not address the core of the problem.

Concern about rising default rates has spurred increasing calls for greater access to IBR plans, which cap repayment at 15 percent of a federal student loan borrower’s discretionary income. Those who do not pay off their loans within 25 years can have their remaining debt forgiven. These features make IBR schemes less a solution to actual problems and more a self-soothing device that lets Americans feel better about loans. Parents and older Americans don’t want to see young adults default. Student borrowers want some reassurance that they will be able to pay off their student loans and still feed themselves. Policymakers need to say they’re doing something on the issue of student debt. In the meantime, the true threat—student indebtedness itself—continues unabated.

More: Why we need a new college admissions strategy. 

The Obama administration has waged a successful campaign to promote access to IBR plans—estimating in 2010 that $6.6 billion in loans would be repaid through IBR, a number that today has risen to $27 billion. This number is likely to grow even more, thanks to recent changes that have expanded eligibility for “Pay-As-You-Earn,” another type of IBR scheme, which caps the borrower’s monthly payment at about 10 percent of discretionary income while forgiving the remaining debt after 20 years of payments.
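To make the two repayment regimes concrete, here is a rough arithmetic sketch. The borrower's figures are hypothetical, the formulas are simplified (real plans adjust for family size, partial financial hardship, and more), and the 150-percent-of-poverty-line threshold uses the 2015 federal guideline for a single person:

```python
# Hypothetical borrower figures, for illustration only.
loan_principal = 30_000.0     # assumed federal loan balance
annual_interest = 0.06        # assumed interest rate
adjusted_gross_income = 35_000.0
poverty_guideline = 11_770.0  # 2015 federal poverty guideline, single person

def standard_monthly_payment(principal, annual_rate, years=10):
    """Fully amortized payment on the standard 10-year plan."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def paye_monthly_payment(agi, poverty_line, share=0.10):
    """Pay-As-You-Earn style cap: roughly 10 percent of discretionary
    income (income above 150 percent of the poverty line), per month."""
    discretionary = max(agi - 1.5 * poverty_line, 0.0)
    return share * discretionary / 12

print(round(standard_monthly_payment(loan_principal, annual_interest), 2))
print(round(paye_monthly_payment(adjusted_gross_income, poverty_guideline), 2))
```

The capped payment comes out well below the standard one, which is exactly the trade the article describes: lower monthly relief in exchange for a longer period of indebtedness.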

Students with outstanding student debt, even very small amounts, are more likely to postpone accumulating assets as young adults…

This is why IBR misses the mark: it currently doesn’t do enough to address one of the key ways student debt may harm young adults, by limiting their ability to accumulate assets. Students with outstanding student debt, even very small amounts, are more likely to postpone accumulating assets as young adults, as recent research shows. IBR plans may even exacerbate this problem by extending the period of students’ indebtedness.

Asset accumulation is important, because it positions young adults for significantly improved economic outcomes over their lifetimes—something higher education is supposed to do. The consequences of diverting income to debt repayment instead of asset accumulation may worsen the wealth divide between those who must take on debt to go to college and those who can avoid it.

Rather than a self-soothing mechanism that allows us to maintain the current financial aid model, we need a truly new direction, one that helps students get to and through college, and prepares them with a solid financial foundation upon joining the workforce. We can plan for a different future, one that favors asset empowerment over debt dependency.

What might an asset-empowered future look like? Giving every child a Child Savings Account would be a good start. These accounts would hold an initial deposit at birth and offer matching funds from public sources. Child Savings Accounts would be a critical part of a strategy to foster expectations among very young students that they should receive postsecondary education and to equip them early and often with strategies to pay for it. Researchers refer to this as helping kids develop a college-saver identity. All families would be able to save into the accounts, but public investments, like Pell Grants, could be delivered strategically to a kid’s account early enough in her academic trajectory to shape achievement and grow into larger balances.
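To illustrate the compounding logic behind such accounts, here is a minimal sketch with invented figures (a $500 deposit at birth, $100 of family saving a year, a one-to-one public match, and 5 percent annual growth; none of these numbers come from the article):

```python
# A sketch of how a hypothetical Child Savings Account might compound.
# Every figure (initial deposit, family saving, match rate, return) is
# an invented assumption, not a proposal from the article.
def csa_balance_at_18(initial_deposit=500.0, family_saving_per_year=100.0,
                      public_match_rate=1.0, annual_return=0.05):
    """Balance at age 18 given yearly growth, family deposits, and a
    dollar-for-dollar public match on those deposits."""
    balance = initial_deposit
    for _ in range(18):
        balance *= 1 + annual_return                        # investment growth
        balance += family_saving_per_year * (1 + public_match_rate)
    return balance

print(round(csa_balance_at_18(), 2))
```

Even modest contributions, matched and compounded over 18 years, grow into a meaningful balance, which is the asset-empowerment mechanism the proposal relies on.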

…we should not invest in IBR plans with the expectation that they are a “cure”; at best, they are a costly stopgap measure that masks the underlying problem we face…

These are admittedly long-term solutions that don’t address our increasingly urgent short-term need to help those already saddled with student debt. But we should not invest in IBR plans with the expectation that they are a “cure”; at best, they are a costly stopgap measure that masks the underlying problem we face: overreliance on student debt. Let’s be clear: IBR plans are necessary only because of a growing recognition that student debt places a destructive burden on some young adults that is counter to our view of education as the “great equalizer.” Our financial aid system should strengthen the return on a post-secondary degree, not weaken it.

Related: Are programs like General Assembly the future of higher education?

It is long past time for public policy to take a dramatic new course. We have to stop thinking about financial aid as important only for influencing access to college, with our sole goal being to make sure kids have money to pay for it. We must consider how financial aid shapes preparation for college, access, and completion, as well as young adults’ long-term financial health. Considered against this more comprehensive metric, it is clear that overuse of student loans is a disturbing—even destructive—practice, and perhaps just as obvious that IBR should, at best, be seen only as a short-term fix while we address the real underlying problem: overreliance on student debt.

About the Author

William Elliott
William Elliott is an associate professor at the University of Kansas (KU), founder of the Assets and Education Initiative (AEDI), a center in KU’s School of Social Welfare, and a senior research fellow in New America's Asset Building Program.

When Seeking to Financially Include Youth, Parents Matter

Much ink has been spilled about the saving habits of adults, but what do we know about how and why children save money? It’s an important question, because one-third of the world’s population is under the age of 19. Statistics show that children who save money are more likely to set goals for their future and do better in school, and less likely to engage in risky behaviors.

Pramod, a 13-year-old living in Bhaktapur, Nepal, is one such child. He used to spend his lunch money playing cyber games and his time cutting school. Now, he saves his money in a bank account instead and dreams of joining the army and buying a house.  Mercy, another 13-year-old, hails from Naivasha, a market town northwest of Nairobi in Kenya. The main industry where she lives is agriculture, but Mercy wants to become a lawyer so that she can help people who are suffering. Since 2012, she has been saving in a bank account to help with her school fees.

Even though most people would agree that child welfare and saving money are critical concerns, making policy for youth financial inclusion…is a tricky matter—especially in developing countries.

Even though most people would agree that child welfare and saving money are critical concerns, making policy for youth financial inclusion (broadly defined as the full, safe, and appropriate inclusion of children and youth in financial services and products) is a tricky matter—especially in developing countries. Social norms and financial conditions vary from country to country, and consensus about which strategies will be most effective remains elusive.

Nonetheless, initiatives like YouthSave—a project I work with at New America that has developed, delivered, and tested savings products accessible to low-income youth—continue to work with banks in countries like Nepal and Kenya to increase flexibility around youth account ownership and control. While youth want control and independence over their own bank accounts and the funds therein, they also need guidance, financial education, and age-appropriate protections.  From allowing trusted adults to cosign on accounts instead of parents to providing youth with the means to check their account balances independently, banks in the developing world have been working to give young customers what they want.

But in Nepal, Kenya, and the other countries (Ghana and Colombia) where YouthSave has partner banks, research shows a much more complex picture. As it turns out, children and adolescents save more when parents co-sign on their accounts. Teenagers, of course, want more autonomy, but their efforts to save are more successful when their parents are involved.

For banks that are looking to offer products to young people, not to mention anyone who works on youth economic development issues, these findings support the idea of a more nuanced approach. Despite the relative autonomy that youth may desire, demand, or enjoy in any given context, parents cannot be ignored;  in fact, they may be critical to giving staying power to potentially poverty-reducing strategies and programs.

And, importantly, involving parents in their children’s savings accounts may well have positive impact on the parents’ lives as well. Where banks take a holistic and inclusive approach to the financial products they offer for young people, some parents have also begun to participate more actively in saving.

With parents as partners in youth economic development, the financial outlook for the family overall looks brighter.

With parents as partners in youth economic development, the financial outlook for the family overall looks brighter. As Martin Mwaura, an associate program manager at K-Note, an NGO that delivers financial education to youth account holders, puts it: when parents realize “that ‘we can do better when we have a bank account,’ then it also means that by extension they are going to support the young people after they open their account.”

Related: Why youth savings in the developing world is good for business and the community.

For young people like Pramod and Mercy, this means a lifeline between the present and their aspirations. Not only is it more likely that money will be there to help cover future shortfalls for educational expenses or to seed a micro-enterprise, it is also true that children and families are jointly investing in and shaping the future in a concrete way.  And, for families living on an economic precipice, this  means building resilience for the next generation from the start.

About the Author

Scarlett Aldebot-Green
Scarlett Aldebot-Green is a senior policy analyst at New America.  She works on the Global Asset Project’s YouthSave Initiative.  Before joining New America, Ms. Aldebot-Green worked in the field of human rights and development in Central America.  Most recently, she was the Assistant Director of the University of Washington Center for Human Rights.