Big Business is now creating chronic U.S. underemployment

One of the perennial problems in accurately measuring the U.S. labor market is how to handle “underemployment,” meaning involuntary part-time employment by people who want to be working full time. The Bureau of Labor Statistics officially defines this category as follows:

Persons employed part time for economic reasons are those who want and are available for full-time work but have had to settle for a part-time schedule.

 
Those people, whether working 30 hours a week or 10 hours a week, even at or near minimum wage, are ineligible for unemployment insurance benefits or virtually any other program that would help someone who was completely jobless. Any paying work at all, even when it’s not enough to make ends meet, usually kicks people out of eligibility for such programs.

More than five years after the onset of the 2007 U.S. recession, many Americans find themselves in this category of being “employed part time for economic reasons.” The U6 measure of unemployment, which factors these people into the official rate, stood at 12.1% in June 2014, just shy of double the official unemployment rate. Almost 7 in 10 part-time workers right now would like to work full time.
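For readers unfamiliar with how U6 relates to the headline (U3) rate, the two BLS formulas can be sketched in a few lines. The component figures below are hypothetical round numbers, chosen only to land near the mid-2014 rates cited above; they are not actual BLS data.

```python
# Sketch of the BLS U3 (headline) and U6 (broad) unemployment rates.
# All inputs are hypothetical round numbers, in millions of people.

def u3_rate(unemployed, labor_force):
    """Headline rate: the unemployed as a share of the labor force."""
    return 100 * unemployed / labor_force

def u6_rate(unemployed, marginally_attached, part_time_economic, labor_force):
    """Broad rate: adds the marginally attached and those employed part
    time for economic reasons to the numerator; the marginally attached
    are also added back into the denominator."""
    return 100 * (unemployed + marginally_attached + part_time_economic) / (
        labor_force + marginally_attached
    )

unemployed = 9.5            # officially jobless and searching (hypothetical)
marginally_attached = 2.0   # want work, searched recently, not in labor force
part_time_economic = 7.5    # involuntary part-timers
labor_force = 155.7

print(round(u3_rate(unemployed, labor_force), 1))   # prints 6.1
print(round(u6_rate(unemployed, marginally_attached,
                    part_time_economic, labor_force), 1))  # prints 12.0
```

Note that with these illustrative inputs, the involuntary part-timers alone account for the bulk of the gap between the two rates, which is the point of the argument that follows.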

The decision to leave underemployed people out of the headline unemployment figure, as I’ve been arguing for five years, has probably been a major factor in our failure to recognize the severity of the structural problems emerging in part-time work, problems that ripple back negatively into the wider consumer economy. Instead, we spent two decades congratulating ourselves on supposedly having much “lower” unemployment than Western European economies.

Those economies, which generally use comprehensive definitions of unemployment much closer to our U6 metric, rarely ran substantially higher than our U6 rate of unemployed plus involuntarily underemployed persons. Moreover, their “unemployed” people were often, in fact, working part time (legally or otherwise) at rates equal to or higher than those of our labor force. So their unemployed and underemployed populations were in far less dire straits than ours during the same period, even before getting into the differences in social safety nets.

Let’s examine one of the big emerging problems that such measurement definitions helped obscure: Involuntary part-time employment for corporate profit reasons, rather than genuine economic reasons.

Often, at least in the past, the “economic reasons” for the lack of full hours took the form of hours cutbacks (in place of mass layoffs) or general belt-tightening by those in a position to be hiring, during economic contractions, slowdowns, and recessions. That was especially true at the small-to-medium business level.

But a far more insidious and damaging trend has exploded onto the scene at the Big Business end of the spectrum, as huge American corporations not only decline to hire more and more of their hourly wage workforce for full workweeks but then demand that these part-time workers be “on call,” without compensation, to work at virtually any hour, day or night, seven days a week. The schedule changes from week to week and from day to day, at the discretion of corporate managers.

Almost half of all part-time workers, according to the Times, now get one week or less of advance notice of their schedule. Among 26-to-32-year-olds working part time, that figure is 47%. Beyond young workers, this problem disproportionately affects women and non-white workers.

In an ongoing series of articles in the New York Times examining the prevalence and consequences of this pernicious staffing practice, we can read example after example of people forced not only to work part time but to remain available full time, without pay, in order to get the paid hours, which prevents workers from taking second jobs to supplement their hours, finding a better or full-time job, or completing their education. Here is one testimonial:

“You had to be available every minute of every day, knowing you would be scheduled for no more than 29 hours per week and knowing there would be no normalcy to your schedule,” he wrote. “I told the person I would like to be scheduled for the same days every week so I could try to get another job to try to make ends meet. She immediately said, ‘Well, that will end our conversation right here. You have to be available every day for us.’

“I asked, ‘Even though I’m trying to get another job?’ ‘Yes.’ Then she just stared at me and asked me to leave. What kind of company does this? What kind of company will not even let you get another job?”


Should government programs be funded Moneyball-style?


In the Big Data age, everyone wants to measure things — and see if they can be made to work better. It’s a good impulse in most cases, but is it being applied appropriately to government?

In a new NYT post, David Leonhardt examines trends in testing government programs for quantifiable effectiveness. He notes at the outset that, despite widespread public suspicion of central (or any) government in this country, the Federal government actually has a pretty impressive track record in a lot of areas. But it has also come under increasing fire over the past twenty years for being slow to adopt popular private-sector tools for measuring the effectiveness of dollars spent against outcomes.

Of the 11 large programs for low- and moderate-income people that have been subject to rigorous, randomized evaluation, only one or two show strong evidence of improving most beneficiaries’ lives. “Less than 1 percent of government spending is backed by even the most basic evidence of cost-effectiveness,” writes Peter Schuck, a Yale law professor, in his new book, “Why Government Fails So Often,” a sweeping history of policy disappointments.

As Mr. Schuck puts it, “the government has largely ignored the ‘moneyball’ revolution in which private-sector decisions are increasingly based on hard data.”

And yet there is some good news in this area, too. The explosion of available data has made evaluating success – in the government and the private sector – easier and less expensive than it used to be. At the same time, a generation of data-savvy policy makers and researchers has entered government and begun pushing it to do better. They have built on earlier efforts by the Bush and Clinton administrations.

The result is a flowering of experiments to figure out what works and what doesn’t.

 
Now, I support measuring government programs to try to make them better. But there are immediate red flags for me surrounding the “how” part of measuring and the “what happens next” after the measuring.

I have four major areas of concern about this trend:

1) Who gets to determine the definitions of “cost-effective” or efficient? Who sets the cutoff points for when a program is simply too ineffective or not getting enough bang-for-the-buck to continue? Do these people consider realities on the ground and the lives affected or just look at spreadsheets?

Are the people creating measurement systems representatives of the people at large and the communities being served by the programs? Are they comprehensively trained in the relevant area backgrounds? Are they just more Wall Street-turned-public-servant-turned-future-lobbyist folks? Are they trying to measure things just to prove government “doesn’t work”?

2) Are we currently under-funding many of these programs so severely and chronically that we can’t effectively demonstrate success they might otherwise have if consistently funded at appropriate levels? Are we going to cut off money to these “under-performing” programs that we’ve already starved of money?

In education, in particular, we’ve seen the paired trend of measuring performance against standards (which I agree is very important) and then tying Federal funding for districts and local funding for teachers to those results, without first making the changes (including funding increases!) necessary to improve the results. Are we also going to start taking away money from programs that aren’t “improving” enough each year because they’re already doing well? (This was the famous backfiring of No Child Left Behind in high-performing education states like Massachusetts and New Jersey.)

To return to the Leonhardt article for a moment (my bolding added):

New York City, Salt Lake City, New York State and Massachusetts have all begun programs to link funding for programs to their success: The more effective they are, the more money they and their backers receive. The programs span child care, job training and juvenile recidivism.

The approach is known as “pay for success,” and it’s likely to spread to Cleveland, Denver and California soon. David Cameron’s conservative government in Britain is also using it. The Obama administration likes the idea, and two House members – Todd Young, an Indiana Republican, and John Delaney, a Maryland Democrat – have introduced a modest bill to pay for a version known as “social impact bonds.”
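The “social impact bond” mechanics behind pay-for-success are simple to sketch: private investors front the program’s cost, and the government repays principal plus a return only if an independent evaluator certifies that the contracted outcome target was met. The figures below are entirely hypothetical, invented just to show the payoff structure; real deals use tiered payment schedules negotiated per contract.

```python
# Hypothetical sketch of a "pay for success" / social impact bond payoff.
# All numbers are invented for illustration only.

def sib_repayment(principal, annual_return, years, outcome_target, measured_outcome):
    """Government repays principal plus a compounded return only if the
    independently measured outcome meets the contracted target;
    otherwise the investors absorb the loss."""
    if measured_outcome < outcome_target:
        return 0.0  # target missed: government owes nothing
    return principal * (1 + annual_return) ** years

# Investors front $10M for a recidivism program; the contract pays out
# if recidivism falls by at least 10%.
hit = sib_repayment(10_000_000, 0.05, 5, outcome_target=0.10, measured_outcome=0.12)
miss = sib_repayment(10_000_000, 0.05, 5, outcome_target=0.10, measured_outcome=0.06)
print(hit)   # roughly $12.76M repaid from public funds
print(miss)  # prints 0.0
```

Note the asymmetry this creates: whenever the program succeeds, the investors’ return comes out of public funds.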

 
Republicans have moved the goalposts so far since the start of the Reagan Administration, with their view that “government is the problem, not the solution,” that everything seems geared toward “proving” that claim: decades of intentionally “starving the beast” (under-funding and de-funding programs across the board by slashing the revenues that pay for them) and then measuring outcomes afterward.

Back to my areas of concern…

3) While it’s important to get as much out of each dollar invested as possible (so you can use as much of the money as possible for as many people as possible), many public functions are public because they are not effective money-makers and need to be funded regardless of balance sheet results. Sometimes things just aren’t all that “cost-effective,” yet are necessary for the promotion or execution of certain social and economic goals.

In fact, the push for “cost-effectiveness” as a measurement skips over the fact that the goal of some programs is to provide an emergency economic floor, below which citizens should not be able to fall, rather than being designed to lift them up. A floor is not an elevator, and you wouldn’t measure a floor’s elevation over time to find out if it’s getting you closer to the top of the building. Many of the War on Poverty programs, in particular, don’t get nearly enough credit for being a major force in keeping total destitution in check even during long recessions and stagnant recoveries, because critics are too busy asking why they haven’t outright ended poverty.

4) Is this just another way to insert private-sector profiteering into the middle of public functions that don’t need it? For example, we’ve already seen Goldman Sachs force its way in on the revenue stream of the Massachusetts prison system to do something the state could do itself and, in doing so, take away money the state could be re-investing to help more ex-convicts stay out of prison.

Washingskins

Greg had a great post the other day on the controversy surrounding the Washington football team’s long-standing “nickname” — Redskins. I’ve spent more time than is healthy reading internet comment sections on this issue, so I consider myself an expert on the arguments against changing the name. Most are as vacuous as they are willfully ignorant.

“But what about the Fighting Irish? Or the Vikings? Or the Celtics? We would have to change them too.”

This one is my favorite. As if there’s no difference between mascots named for the predominant ethnic group in the area and a racial nickname for the sports team in a capital that once carried out an active policy of ethnic cleansing against that racial group. As someone with Norwegian, Irish, and Native American ancestry, I am particularly amused by these comparisons. No Minnesotans of Swedish origin are up in arms about being compared to their Viking ancestors. Sure, they did a fair share of pillaging, but Vikings had democratic assemblies and were among the most advanced seafaring civilizations of their era. Boston Celtics? The history of the basketball team cannot be told without contextualizing it within Boston’s identity as an Irish city (for better and for worse). Fighting Irish is a similar situation. It was coined by a member of Notre Dame’s own football team and embraced by the Irish Catholic student body, unless there’s a massive uprising against it we’re not hearing about.

“But Indians support the name!”

Then they’ll point to some ten-year-old poll and talk about how there are Redskins team names on Native American reservations. Oh, ok. No problem then. I guess the fact that some Natives don’t care negates those who do. I’m pretty sure that’s how it works. Just like if your black friend lets you say slurs, you’re free to yell them around other black people and then name your business a racial slur. No problems there.

“Uhhg it’s the LIBERALS trying to PC everything up.”

Ah, the old P.C. canard. I prefer to think of this issue less as enforcing “politically correct” terminology and more making sure none of our sport teams are named something blatantly racist. When the Redskins were founded by vile racist George Preston Marshall, the term Oriental was in vogue to refer to Asian people. Does that mean we should name a team the Orientals? What about the Coloreds? That word and a few others were pretty popular around that time to refer to black people. Dan Snyder, owner of the Redskins, is a Jewish person. I wonder if he would appreciate a team named the Hebes (or worse) with a hooknosed Jew clasping a bag of gold. I know for sure the Washington Darkeys wouldn’t fly today, but this is essentially what we are talking about: an antique word, a relic from an era where pasty scientists obsessively studied the differences between the “races” and learned men established white dominion over the language.

“Don’t be weak and choose to be offended.”

This is perhaps the stupidest. First of all, you don’t really choose to take offense. But second of all, I don’t think Natives are “offended” so much as sick of the bullshit. I know a decent number of Natives, most of whom grew up on or close to a reservation. I once asked a friend of mine about the name Redskins and he replied, “It’s not that it’s offensive — it’s just really dumb.” He went on to describe numerous ways a team could honor Native Americans without going right to skin color. This guy also has paler skin than I do, but he grew up in Oklahoma in a Native family. It just demonstrates that the name is inaccurate in addition to stupid.

“It’s not racist! The Redskins are named after their Indian former coach”

This one’s complicated, and it involves draft dodging. The Redskins were founded in Boston and originally shared a field with the Boston Braves baseball team. When they moved to Fenway Park, home of the Red Sox, the name was changed to Redskins. Marshall was quoted at the time as saying the name change was to avoid confusion with the other Native-referencing team, NOT because a coach was Native. It is also not at all clear that the coach Marshall referenced, William “Lone Star” Dietz, was actually a Native American. Dietz was put on trial and briefly imprisoned for dodging the WWI draft by claiming he was a Native American. The jury could not conclude that he knowingly lied about his heritage, but a number of discrepancies in the Dietz family’s testimony, as well as his interactions with his supposed sister and her Sioux tribesmen, suggest otherwise. So not only is this origin story untrue, the supposed Indian coach likely wasn’t a Native.

“There are bigger issues we should be concerned about”

The history of the United States is a road paved with Native American bones. Since coming to this continent, Europeans have cleansed Natives from their lands, reneged on treaties, massacred Native women and children, and conducted biological warfare against them. Many Native Americans live today in extreme poverty, while rape and sexual violence are epidemic on reservations. These are obviously much more pressing issues than a stupid team name. But the name Redskins remains a window into an epoch (one not yet entirely over) when racism was blissfully casual and the Washington football marching band wore headdresses and played “Dixie” before the anthem. Perhaps it is a fitting name for Washington D.C.’s team after all.

Banging around in the dark in Mesopotamia

Mesopotamia: the land between two rivers, the buffer between eastern and western empires, the perennial peripheral battle zone between Persia and her Western challengers. Today: Iraq.

Parthian Empire at its greatest extent. Credit: Keeby101- Wikimedia


The Republican Iraq policy of the past 12 years has been to stumble through a pitch-black room strewn with hazards, with no awareness of what’s behind, what’s nearby, or what’s ahead. All the while, they triumphantly declare that we’re almost there and that everyone else is an unpatriotic traitor who wants us to fail.

On the other side of the room, Iran — supposedly the Republicans’ greatest boogeyman since the Soviet Union and at least the second greatest since 1979 — is standing silently with night vision goggles and thousands of years of knowledge of what’s behind. If we get too close, it steps out of the way, but remains inside the room. It used to wait outside the room, in the darkened hallway, but then the Republicans opened the door by trying to get into the room, as if they had no idea Iran was even next to the room and that that closed door had been the only thing keeping Iran out.

I don’t have an instinctive, overriding opposition to Iran the way some do. But if the goal was to keep Iran from gaining influence over more territory, there had to be a counterbalancing force left in Iraq that stood in opposition to Shia Iran. And that, statistically speaking, would have meant a minority-led dictatorship. Which is not a reason to support the existence of such a thing (though we had managed to mitigate it without dismantling it by 2002). But in their bumbling haste, they never even attempted to reconcile the fact that their two regional policy goals, taking out the Iraqi regime and containing Iran, were at complete odds. Today they find themselves rallying to the Shia-aligned government whose biggest friends are in Tehran, condemning the President for not unquestioningly dumping money, bombs, and troops into the cause, even as they condemn his efforts to negotiate a nuclear solution with Tehran.

Worst of all: Still today they refuse to entertain the idea that the 2003 invasion was a mistake of vast and sweeping proportions on all possible fronts. A crippling, even rippling, disaster at home in our politics, economy, and budget; the elephant in the room of U.S. foreign policy for years to come; and a massive disruption that upended the delicate Middle Eastern balance and tore a fragile country apart into a bloodbath. At least most Democrats who supported it, despite how obviously bad an idea it was at the time, have the decency to admit they screwed up.

The continued denial of reality on Iraq, let alone repentance, from Republicans right now makes me politically angry in a way I haven’t felt since George W. Bush was in office. Their genuinely — not even strategically or cynically — held belief that President Obama is to blame for what’s happening now is merely the infuriating cherry on top of a rage sundae.

Oped | American Unexceptionalism & The Republic


The real story of the origins of the U.S. political system. Composite oped from two new essays (here and here) in The Globalist.

You’ve read the story before. A number of loosely aligned, merchant-dominated offshore territories of a European empire begin chafing at their distant monarch and the high taxes he imposed without giving them a reasonable say in their own governance.

Predictably enough, their mounting dissatisfaction is met with an increasingly overbearing response — including military deployments. That strategy is pursued until the provinces reach a breaking point.

They declare themselves free of the faraway king and initiate a rebellion. Not all the territories are persuaded to join. Some prefer to remain loyal to the crown.

The rebellion binds together a small collection of sovereign entities into a union, equipped with a weak, loosely formalized provisional government. Its purpose is to direct the union’s foreign policy and manage the rebellion.

Government after monarchy

Having declared themselves without a king, the newly independent elite must devise a replacement system of government for the continued union.

For a time, they consider the possibility of bringing in another member of the European nobility to serve as king. Such an invitation or election of an outsider as king was common in Europe for centuries, from Poland to Sweden to the Holy Roman Empire. Even the papacy is an elective monarchy.

But eventually the merchant elites and past commanders of the rebellion decide they have been doing fine without a royal. They are now content to continue to strengthen the temporary system as it is.

The exceptional story takes shape

These elite gentlemen look around at precedents from other self-governing states without kings. Smaller free states, such as Florence and Venice, had previously installed non-hereditary systems of rule by the commercial elites and major families. They had called them “republics,” after the elite-run classical “Roman Republic.”

Elections in such systems are highly indirect and susceptible to manipulation. They are also restricted to a very small number of participants. Essentially, the only voters are members of the propertied, male elite — usually white.

They make no secret of the fact that this exclusionary voting franchise suits the new country’s leaders’ aims anyway. They are not interested in creating a democracy. Rather, they are keen to establish a republic insulated from the passions of the mobs.

It is then agreed that, under the new union of rebel provinces, each member republic will send delegations to the union’s government, but those delegations will be answerable to their home governments. This further keeps the regular people away from any major levers of power.

Finally, they devise an elaborate system of checks and balances. The ostensible purpose is to preserve the sovereignty of the member republics within the union. The bigger purpose is to prevent the “tyranny” of a central government and an executive. The union will have weak powers of taxation — only enough to mount a common defense of the member republics.

This is, of course, the story of the Republic of the Seven United Provinces in the Netherlands and their departure from Spain, roughly two centuries before the U.S. Constitution was ratified by thirteen former British colonies.

So much for America’s origin story being exceptional, as claimed for so long.

The Dutch precedent was a model, both to be emulated and avoided, for the framers of the U.S. Constitution and those advocating for its adoption. Far from being an “exceptional” idea, the original version of the United States was just the latest iteration of an existing system. It is what came after that that made it exceptional.

Almost before the ink had dried on the U.S. Constitution, the United States and its citizenry — indeed, even its government officials — began adapting the document in other directions the framers had never intended or anticipated.

That is probably for the best, from the world’s perspective, and from the country’s, since rule by a narrow slave-owning elite is not exactly a paragon of excellent governance for the world to follow.

Maybe let’s stop trying to “help” Iraq for a second

There’s this sort of myth, which sprang up in the 2004-2008 period of the U.S. War in Iraq, that the world could not have tolerated Saddam Hussein’s cruel rule a moment longer than March 2003. This was used to justify the invasion retroactively when it turned out that there were no weapons of mass destruction, which was the original stated reason.

Of course, it’s a bit odd on its face to claim that the cruelty of his regime was suddenly supposedly intolerable by the end of 2002 (more so given that it took more than four months between the U.S. authorization of force and the actual invasion), in a way that it had not been a decade and a half earlier, when he was an American ally.

Moreover, not only was the Iraqi regime not actively committing some kind of genocide at the time of the invasion (again, not the stated reason for intervention at the time it happened), but the U.S. and the world had already pretty effectively contained the regime over the course of the 1990s.

No, Iraq wasn’t doing great by the end of 2002 — it certainly couldn’t be expected to under the weight of massive sanctions from around the world — but it was probably more stable and less violent than it had been for quite some time. And that wasn’t by chance.

With US/UK no-fly zones operating continuously from 1991 to 2003 to protect the Kurdish population and the Shia population from Iraqi air campaigns — the U.S. alone flew more than 200,000 missions by the beginning of 1999, often taking a lot of Iraqi anti-air fire — along with other protective measures for the persecuted zones, the bad old days were basically over (by comparison to what preceded or followed anyway).

Again, I recognize it was certainly far from ideal. Dissenters and sectarian minorities were still being persecuted at the hands of the regime in the Sunni areas and on the ground in the south, but that’s a situation that happens in authoritarian regimes the world over. In stark contrast with the 2003-2011 U.S. war, the ad hoc solution from 1991-2003 involved very little loss of life on either side — somewhere in the range of 50 people were killed during the no-fly zone operations — and it prevented the regime from going around bombing and gassing everyone (or sectarian extremists from killing each other). By the end of 2002, the world had a pretty solid handle on keeping Iraq stable and non-genocidal. Then George W. Bush’s invasion happened.

At that point, the whole country broke. Just plain fell apart into total violence and an extremists’ free-for-all where everyone could avenge every old wrong. We had no plan, no resources, no experience, and not enough troops, and we just decided to wing it. And the inevitable result was just total pandemonium and wholesale destruction.

We did that. That’s on us. And we never really did fix it before we left. (Which isn’t an argument for staying indefinitely, as I’ll get to in a moment.) There’s no way that was better than the default situation in 2002, in which the persecuted populations were protected from mass slaughter and everyone else had it bad but wasn’t living in an unending hell.

Now as ISIS pours over the border from Syria, capturing four major cities (including the country’s second largest) and perhaps soon the northern oil fields, while the Iraqi Army falls back into a chaotic retreat, U.S. pundits — the war everywhere always brigade — are asking each other whether it’s time to re-intervene in Iraq.

Because apparently a permanent U.S. military engagement from 1990-2011 wasn’t enough. Because apparently we didn’t do enough damage in the second round while trying to “help.”

Look, I’m no isolationist. I’m actually even an advocate for humanitarian military intervention in many cases, to the point of annoying some other progressives. But I try to be smart about it and historically conscious. And right now, right there, this just isn’t one of those cases.

Iraq now is like when you break an antique, keep trying to help fix it, and everything else around it breaks in the process and the big pieces keep breaking into smaller pieces. Finally your grandma tells you “just stop.”

Colin Powell is alleged to have said, prior to the 2003 invasion, that there would be a “you break it, you bought it” policy on Iraq with regard to American involvement. Thomas Friedman dubbed this the “Pottery Barn rule,” despite Pottery Barn not actually having such a rule. But either way, one imagines that if you blew up an entire Pottery Barn store, you would not be asking “hey, so do you think I should go back and help out?” two years later when someone else accidentally broke a vase.

We need to stop “helping” Iraq.

Constitutional rebellions

Should constitutions include an official principle of the people’s right to rebel against their governments?

There has always been a bit of (or a lot of) tension between those who believe the right to revolt is natural and inalienable at all times versus those who believe all transitions must be orderly, legal, and constitutional.

As a pressure valve for self-preservation, the latter camp tends to adopt constitutional systems (formal or informal) that allow for regular turnover, either by frequent elections or by scheduled leadership changes. Britain’s modern parliamentary system, for example, seeks to keep rebellion in check by making it relatively easy to bring down governments that are messing up, via orderly no-confidence votes and early elections. In another example with similar motivations, the current Chinese government leadership now has five-year terms between internal party elections, plus age limits, to guarantee turnover.

The U.S. model tends to release the pressure through a combination of semi-frequent elections (though no early elections for the presidency, ever) and very formalized removal procedures for misconduct. So, civilians can remove other civilians constitutionally from power and transfer the power down an established chain without elections, and it’s not a coup d’état.

Still other systems allow for less turnover but implicitly favor mass demonstration as the best way to express opposition. The various French Republics, descended from the awkward marriage of a powerful central executive (originally the king) and multiple revolutions, arrived at a strange compromise under De Gaulle’s 5th Republic after 1958. That compromise was to have (more or less) a nearly omnipotent president elected to seven-year terms (with more than one term permitted), almost no formal way to express opposition (e.g. no early elections, a weak parliament, etc.), and a tacit understanding that unions, students, and other protesters would go wild in the streets (or at least go on mass strikes) when they became sufficiently furious over something. As in all 15 French constitutions, the one implemented in 1958 included a “right to resist oppression.” This compromise posed various problems for the 5th Republic, but it has certainly been more stable and stronger than the third or fourth republics, which basically collapsed under their own inefficacy. (Both the first and second ended in fluid transitions into dictatorship.) Eventually, though, the French did moderate the presidency down to five-year terms at the beginning of this century.

In the United States, of course, there’s been lots of debate since 1776 (or even before) about whether (and when) people can overthrow their governments. Through the government’s repeated domestic use of military force, as well as consistent court decisions, a consensus has been reached that it’s pretty much not ok to overthrow or take up arms against the U.S. government… unless you count that first time, when they waged a war of separation against the British Empire and various loyalist populations. So, had any of the later insurrections — whether in Appalachia, Western Massachusetts, the Confederacy, or among the American Indians — prevailed, I guess it would have been a different story. (And indeed, that one uncomfortable, local armed coup d’état in North Carolina in 1898 went largely ignored by the U.S.) But at the very least, it has been made clear that there is no legal or constitutional right to overthrow the government of the United States, even if perhaps there is a Jeffersonian-style “natural” right to give it your best shot and see what happens.

But there’s also a very curious compromise in a number of countries, occupying a middle ground between the “transition must be legal” faction and the “revolution is a natural right” faction. A study by Daniel Lansberg-Rodriguez, Tom Ginsburg, and Emiliana Versteeg (discussed here after the recent Thailand coup) found that 20% of countries with formal constitutions in effect today (up from 10% in 1980) have adopted constitutional provisions explicitly protecting the right of the people to rebel, revolt, or otherwise topple their governments. Some of these provisions are as vague as the French one I mentioned above. Others, on the Turkish model, are much more explicit in carving out a role for the country’s military to intervene against the civilian leadership when it oversteps (or is perceived as overstepping) against the people or “democracy” or secularism or whatever.