aroguexenolith · 4 months
Text
If your employer fires you without cause you can file for unemployment benefits! Which probably most people know, but what people DON’T know is that former employees claiming unemployment benefits can raise the employer’s unemployment insurance tax rate. Not like, a ton, but still—you get to hit back just a little bit.
2 notes · View notes
arcticdementor · 3 years
Link
Today the richest 40 Americans have more wealth than the poorest 185 million Americans. The leading 100 landowners now own 40 million acres of American land, an area the size of New England. There has been a vast increase in American inequality since the mid-20th century, and Europe — though some way behind — is on a similar course.
These are among the alarming stats cited by Joel Kotkin’s The Coming of Neo-Feudalism, published earlier this year just as lockdown sped up some of the trends he chronicled: increased tech dominance, rising inequality between rich and poor, not just in wealth but in health, and record levels of loneliness (4,000 Japanese people die alone each week, he cheerfully informs us).
Kotkin is among a handful of thinkers warning about a cluster of related trends, including not just inequality but declining social mobility, rising levels of celibacy and a shrinking arena of political debate controlled by a small number of like-minded people.
The one commonality is that all of these things, along with the polarisation of politics along quasi-religious lines, the decline of nationalism and the role of universities in enforcing orthodoxy, were the norm in pre-modern societies. In our economic structure, our politics, our identity and our sex lives we are moving away from the trends that were common between the first railway and first email. But what if the modern age was the anomaly, and we’re simply returning to life as it has always been?
Most of the medieval left-behinds would have worked at home or nearby, the term “commuter” only being coined in the 1840s as going to an office or factory became the norm, a trend that only began to reverse in the 21st century (accelerating sharply this year).
Along with income stratification, another pre-modern trend is the decline of social mobility, which almost everywhere is slowing (with the exception of immigrant communities, many of whom come from the middle class back home).
Social mobility in the US has fallen by 20% since the early 1980s, according to Kotkin, and the California-based Antonio Garcia Martinez has talked of an informal caste system in the state, with huge wage differences between rich and poor and housing restrictions removing any hope of rising up. California now has some of the most dystopian income inequality in the country, with vast numbers of multimillionaires but also a homeless underclass suffering from “medieval” diseases.
Unfortunately, where California leads, America and then Europe follow.
Patronage has made a comeback, especially among artists, who have largely returned to their pre-modern financial norm: desperate poverty. Whereas musicians and writers have always struggled, the combination of housing costs, reduced government support and the internet has ended what was until then an unappreciated golden age; instead they turn once again to patrons, although today it is digital patronage rather than aristocratic benevolence.
A caste system creates caste interests, and some liken today’s economy to medieval Europe’s tripartite system, in which society was divided between those who pray, those who fight and those who work. Just as the medieval clergy and nobility had a common interest in the system set against the laborers, so it is today, with what Thomas Piketty calls the Merchant Right and Brahmin Left — two sections of the elite with different worldviews but a common interest in the liberal order, and a common fear of the third estate.
Tech is by nature anti-egalitarian, creating natural monopolies that wield vastly more power than any of the great industrial barons of the modern age, and have cultural power far greater than newspapers of the past, closer to that of the Church in Kotkin’s view; their algorithms and search engines shape our worldview and our thoughts, and they can, and do, censor people with heretical views.
Rising inequality and stratification are linked to the decline of modern sexual habits. The nuclear family is something of a western oddity, developing as a result of Catholic Church marriage laws and reaching its zenith in the 19th and 20th centuries with the Victorian cult of family and mid-20th century “hi honey I’m home” Americana. Today, however, the nuclear household is in decline, with 32 million American adults living with their parents or grandparents, a growing trend in pretty much all western countries except Scandinavia (which may partly explain the region’s relative success with Covid-19).
This is a return to the norm, as with the rise of the involuntarily celibate. Celibacy was common in medieval Europe, where between 15% and 25% of men and women would have joined holy orders. In the early modern period, with rising incomes and Protestantism, celibacy rates plunged, but they have now returned to the medieval level.
The first estate of this neo-feudal age is centred on academia, which has likewise returned to its pre-modern norm. At the time of the 1968 student protests, university faculty in both the US and Britain leaned slightly left, as one would expect of the profession. By the time of Donald Trump’s election many university departments had Democrat-to-Republican ratios of 20:1, 50:1 or even 100:1. Some had no conservative academics, or none prepared to admit it. Similar trends are found in Britain.
Around 900 years ago Oxford evolved out of communities of monks and priests; for centuries it was run by “clerics”, although that word had a slightly wider meaning, and such was the legacy that the celibacy rule was not fully dropped until 1882.
This was only a decade after non-Anglicans were allowed to take degrees for the first time, Communion having been a condition until then. A similar pattern existed in the United States, where each university was associated with a different church: Yale and Harvard with the Congregationalists, Princeton with Presbyterians, Columbia with Episcopalians. The increasingly narrow focus on what can be taught at these institutions is not new.
Similarly, politics has taken on the pre-modern role of religion. The internet has often been compared to the printing press, and when printing was introduced it didn’t lead to a world of contemplative philosophy; books of high-minded inquiry were vastly outsold by tracts about evil witches and heretics.
The word “medieval” is almost always pejorative but the post-printing early modern period was the golden age of religious hatred and torture; the major witch hunts occurred in an age of rising literacy, because what people wanted to read about was a lot of the time complete garbage. Likewise, with the internet, and in particular the iPhone, which has unleashed the fires of faith again, helping spread half-truths and creating a new caste of firebrand preachers (or, as they used to be called, journalists).
English politics from the 16th to the 19th century was “a branch of theology” in Robert Tombs’s words; Anglicans and rural landowners formed the Conservative Party, and Nonconformists and the merchant elite the core of the Liberal Party. It was only with industrialisation that political focus turned to class and economics, but the identity-based conflict between Conservatives and Labour in the 2020s seems closer to the division of Tories and Whigs than to the political split of 50 years ago; it’s about worldview and identity rather than economic status.
Post-modern politics have also revived pre-modern attitudes to class. In medieval society the poor were despised, and numerous words stem from names for the lower orders, among them ignoble, churlish, villain and boor (in contrast “generous” comes from generosus, and “gentle” from gentilis, terms for the aristocracy). Medieval poems and fables depict peasants as credulous, greedy and insolent — and when they get punched, as they inevitably do, they deserve it.
Compare this to the evolution of comedy in the post-industrial west, where the butt of the joke is the rube from the small town, laughed at for being out of touch with modern political sensibilities. The most recent Borat film epitomises this form of modern comedy that, while meticulously avoiding any offence towards the sacred ideas of the elite, relentlessly humiliates the churls.
The third estate are mocked for still clinging to that other outmoded modern idea, the nation-state. Nation-states rose with the technology of the modern day — printing, the telegraph and railways — and they have been undone by the technology of the post-modern era. A liberal in England now has more in common with a liberal in Germany than with his conservative neighbour, in a way that was not possible before the internet.
Nations were semi-imagined communities, and what follows is a return to the norm — tribalism, on a micro scale, but tribalism nonetheless, whether along racial, religious or, most likely, political-sectarian lines. Indeed, in some ways we’re seeing a return to empire.
The middle-class age meant the triumph of bourgeois values, and the decline of the middle class has led to those values being widely despised and mocked by believers in the higher-status bohemian attitudes. Now the age of the average man is over, and the age of the global aristocrat has arrived.
2 notes · View notes
dxmedstudent · 6 years
Note
Obviously only tangentially related to you, but do you know why medications are so expensive in the US? My friend said she was holding off on getting asthma medication and when I asked her why, she said it costs $50 with insurance? I take the same one and it costs the equivalent of $2 without insurance. Same thing when I suggested she take mefenamic acid for her period pain; it doesn’t exist over the counter there. How?
I’m not an expert in the economics of pharmaceutical research, so my answer will be an incomplete one, but I expect that collective medblr can add some knowledge. It’s a very complex, emotive problem that has far-reaching consequences for all of us, wherever we work.
The rest is under the line, for brevity.
In order to bring a drug into being, it has to be developed through research. Though universities, hospitals and government agencies play a significant and often ignored role in research, much of that burden falls to the large pharmaceutical companies who have the money to sink into it; it’s not a venture that many can afford, and if a drug isn’t effective or ends up having bad effects, companies can even go bankrupt. As a result, when a medication comes out, it is usually under patent for a set period. This means that nobody else can make that medication without the permission of the company that holds the rights over it. That monopoly in turn means that the company can set a price for that medication knowing that they are the only ones making it. Once the patent ends, any company can make that drug, and prices will drop, because competing manufacturers will sell the product for a much lower price.

The prices often start considerably higher than it costs to make the medicine; that’s true of every product we buy, to varying degrees. In theory, the prices of medicines are meant to reflect the money that has been put into their research; the company only has a set time to recoup the costs and make a profit, because when the patent ends, it won’t be able to command anywhere near that price for the product. That’s how it’s supposed to work, in theory. However, whether the prices are ‘fair’, or whether they are inflated higher than they need to be in order to generate profit, is a matter of fierce debate.

Medication prices also vary considerably across the world, sometimes for good reason: arguably companies have to adjust prices to local markets. For example, there’s no way the average person in India making $616 in US money would be able to afford the same prices as US patients; not when the difference in earnings is so considerable. I’ve seen posts from the US angry that they are being charged more for a medication than someone in a much poorer country, who feel like they are ‘subsidising’ others, but in reality that product is still very expensive in that country, relative to people’s wages over there. It’s much more complex than that. That’s why there have been real battles between Western pharma companies and local companies in poor countries who have been making patented medication at a cheaper price (despite the legal and IP implications), serving populations who could never afford the ‘lower’ prices set for Western markets.
Where it becomes difficult is the ethics of pricing. How much is a fair price to charge? Companies need to pay their employees, and modern capitalism relies on profit, but how much profit is acceptable? What exactly is a fair margin on medications, when you want to feed enough money back into the company to create more research, but don’t want to drive costs out of the reach of the people who need it?
I think insurance companies themselves have to shoulder significant responsibility in this context. They choose what they pay out for, and if they choose to make patients shoulder the majority of the cost of an expensive drug, then they aren’t really fulfilling the role of insurance. Which is, in its simplest form, ‘I pay a little bit regularly so that when I have a big cost, I don’t need to shoulder it all in one sitting’. Working in a system where insurance is entirely unnecessary, I am not comfortable with the role insurance companies play in healthcare across much of the world, nor with what I hear about how it works in the US in general. It often makes me deeply, deeply uncomfortable, and I extend my sympathies to the people who have to deal with these companies as healthcare providers and advocates of patient care, and to patients who find themselves at the mercy of them. I don’t personally believe health insurance companies should be an integral part of healthcare, in part because that’s how it works here, without them. I believe that it would genuinely save patients and the state so much money and stress if more countries took on a model similar to the NHS or those of some other European countries. Under the US system, the government (and therefore people, through taxes) already pays more per head than we do in the UK, and then people also pay a lot for insurance on top. Which really doesn’t sound like a fair deal to me.
In the UK, our formulary, the BNF, tells us how much something costs to prescribe, because the NHS picks up the majority of the cost. So we know when a drug is no longer under patent, and what we prescribe as first line can change when this happens. We’re encouraged to evaluate effectiveness and price, and high prices can have an effect on official guidelines on an NHS-wide scale. We don’t pay more than the standard prescription charge, and if you are on benefits, are a child or have particular health conditions, then you pay nothing for your prescriptions. Even though prescriptions here are under £10, and they can be monthly, that can still be a challenge for people who are struggling, so I can’t even begin to imagine dealing with the kinds of prices that people deal with in the US. For the most part, the price of a medication here isn’t passed on to patients directly; the problem arises only if NICE deems a medication too expensive compared to its effectiveness and decides that it can’t be used. And the fact that treatments here are not always allocated equally between different areas, giving rise to what people describe as ‘postcode lotteries’, can cause problems of its own. So I won’t pretend we have it 100% worked out.
We don’t have mefenamic acid over the counter here, either, actually. The restrictions on what is and isn’t available OTC can be pretty different from country to country! It’s a prescription medication, though not hard to get from the GP. Even here, there’s a difference in the cost of medications based on branding. Branded paracetamol or ibuprofen will set you back a few pounds, whereas generic paracetamol can be as little as 13p per packet of 16 tablets. So it pays for us to ask for medicines by their generic name wherever possible; my favourite pharmacy always offers both (and usually tells me if it’s cheaper to get something on prescription or over the counter, even if I have it prescribed), so it’s always worth having a chat with your pharmacist about what the best options are. Pharmacists are a genuinely underappreciated and under-used resource and if I could clone my local pharmacist I’d ship him out to all of you.

You don’t mention where you live, or where your friend lives (US and Canada? Or somewhere else entirely), so it’s hard to know why the difference is so big. Perhaps the pricing for patented medication is very different between your countries. Or perhaps the medicines aren’t under patent but the ones on offer are branded, and companies are marking up the prices considerably; without knowing the meds it’s hard to know.
9 notes · View notes
archaeologysucks · 6 years
Text
Someone sent me a list of questions about professional field archaeology, and I thought my answers might be helpful or educational for some of you.
1. Can you describe what your work is now and what specific field you are in?
Most of the work I do at present is in the field of Cultural Resource Management (CRM). It is the sort of work the majority of archaeologists working in the US do. I am an archaeological field technician, which means most of my work is done in the field, though there are also office-based archaeology jobs, involving research, analysis, logistics, and report writing. Most of my work involves cultural survey and archaeological monitoring. Cultural survey usually involves walking over an area where development is planned, and either visually scanning the ground surface for artifacts, or else digging holes at regular intervals, and screening the soil to see if there are any artifacts or cultural features within it. Archaeological monitoring involves observing construction crews moving soil with heavy equipment, in case they turn up anything of historical significance.
2. How did you become interested in this career?
I have always been interested in history, as well as in hunting for objects like interesting rocks or fossils. In high school, I had an opportunity to spend a few weeks volunteering on an archaeological excavation in Montana. I enjoyed finding things and learning about the methodology behind the work.
3. Are there any geographic constraints to this career?
Unless you can get a work visa for another country, which is often difficult, you are constrained to working in the country of your citizenship. You may need to travel a great deal for work, and be prepared to jump from job to job, if you want to work full time. I have worked in many parts of the country, but eventually decided to settle in Western Washington. As a trade-off, my work has become more seasonal. Winters are often slow, with little work available. I have not worked outside Washington State in 7 years, but the companies I work for still sometimes send me out of town and put me up in hotels for days or weeks at a time.
4. What schooling or training was necessary for this career?
For a career in archaeology, you need a college degree in archaeology, or in a related field like anthropology or history, plus an accredited field school (usually a few weeks over a summer). I believe there are some two-year archaeology certification programs in the US, but I don’t know any specifics on that. Most people I work with have a BA or MA. You need at least an MA to be considered a Professional Archaeologist, and to legally work on some projects.
5. Do you belong to any professional organizations/unions?
I don't. There are some groups trying to organize a union for archaeologists, but it has been an uphill struggle. Many companies I have worked for would blacklist employees if they were caught so much as talking about unions at work.
6. What are the rewards and challenges of this career?
For rewards, I appreciate the flexibility of my work. If I ever want to take a day or a week or even a month off, it's rarely a problem. My employers just ask me to let them know when I am available again. I enjoy the people I work with (in the PNW; this wasn't universally the case when I worked in other parts of the country). I get to work outdoors, and see some beautiful places. For challenges, having a slow season is hard on my finances. I have to plan and budget for the possibility of having little to no work for 3 or 4 months out of the year. Sometimes the weather is unpleasant, and I get cold and wet and muddy. The terrain can be rough, and there are lots of opportunities for injury. The pay isn't great. It's not terrible, but it's low, considering this is a field that requires someone to invest in a specific kind of 4-year degree. Sometimes the work is boring or not very rewarding, especially when I go for days or weeks without finding anything at all. There can be some anxiety involved in not knowing from one week to the next where or whether I will be working.
7. What freedoms/constraints are inherent in this career?
I think the freedoms and constraints are pretty well covered in the rewards and challenges section. One freedom is that you never have to wear uncomfortable or expensive office clothes. All my field clothes were bought second hand, and it's fine if you show up for work with your hair uncombed, looking like a scarecrow, which is great for those mornings when I just want to roll out of bed at the last possible minute. Another constraint would be the difficulty of forming personal relationships with people, when you're on the road all the time, or moving from job to job. Having a home life, romantic relationship, family, regular activities, or pets can be difficult.
8. What is the average beginning salary in this field?
I'm not gonna lie; it's low. My first job in archaeology, 10 years ago, was $12.50/hr, and rates of pay in the eastern part of the US haven't gone up much since then. I never got more than $15/hr working on the east coast. Which is criminal, considering the college debt a lot of archaeologists are forced to take on. On the plus side, if you're living on the road and being put up in hotels, you might not need to pay rent anywhere, or have many bills. You'll be getting per diem, which is a daily amount of money meant to cover your food and the inconvenience of being away from home, and you might get your mileage paid for as well, if you have your own vehicle. I've only had one job that was salaried; everything else was an hourly wage. Now, working in the PNW with 10+ years of field experience, I make $20/hr, and, as I said above, it's not always full time.
9. Do you enjoy your job?
There are times when I do. When the weather is good, and the work is fairly easy, and I'm working with people I enjoy. There are other times when I wake up in the dark and it's raining and I still ache from the day before, when my first thought is "oh god kill me now". With the season of cold and dark upon us, I think more and more about getting into another line of work. Something with a regular paycheck I can count on, and the ability to come home to my own bed every night, and not spend 2+ hours of my day commuting all over the state, where I can work indoors and be warm and dry and clean, and make myself as many cups of tea a day as I want. Don't get me wrong; I think the work I do is important and worthwhile, and that someone definitely needs to do it, but I think that someone should be paid more, and should possibly not be me.
10. What is the potential highest salary in this field?
I really don't know, off the top of my head, but I'm gonna guess it's not high. I doubt there are any archaeologists making $100k, and I don't think I've ever known anyone who made more than $50k. I did work for a company once whose owner was pretty loaded, but he wasn't an archaeologist; he was a business owner.
11. What advice would you give to someone interested in this field?
Make sure you want it. Go into it with both eyes open. Care deeply about historical preservation. Specialize in something, especially something tech-related, like GIS. Follow archaeology job posting boards, so that you know what jobs are out there, what qualifications they are looking for, and what they pay. Know your value. Be flexible. And have a backup plan, for when the work is slim.
12. What local schools/programs would you recommend to someone interested in going into this field?
I don't know any in particular off the top of my head. I got my degrees overseas, and only have the vaguest understanding of how the American higher education system operates. As long as you get a 4-year degree in a relevant field, and attend an accredited field school, you should be good to go. On top of that, consider volunteering at museums, and other history or archaeology related organizations.
Best of luck to you with your future plans!
54 notes · View notes
yessadirichards · 4 years
Photo
'Downton Abbey' creator turns to the beautiful game
NEW YORK
With global soccer shut down these days, fans desperate for a fix of the beautiful game may find it from a rather unlikely source — the creator of the stately “Downton Abbey.”
Julian Fellowes has created and co-written the new Netflix series “The English Game,” a six-part look at the origins of a onetime British gentleman's game that has become the most popular sport on the planet.
“There are certain sports that cut right through society and appeal to people at every level. And that seems to me to be a wholly good thing,” Fellowes says.
The series is set in 1879 and focuses on the first full-time professional players and how they infused the game with new tactics and passing strategies. But this being a Fellowes project, there's plenty of drama off the pitch, too: the rise of both the working class and women's rights.
Fellowes actually knew little about the origins of soccer when he began the project, but he was aware of its force firsthand: His son, Peregrine, is a rabid fan of Manchester United and, as a boy, decorated his pillowcases, duvet covers and lampshades with the team's crest. Father and son attended games, and the elder Fellowes soon grew to admire the athleticism of the players.
“When you watch anything — and I do pretty well mean anything — being done superbly, it generates an interest even in the hearts of someone who is not particularly concerned with that subject," he says. “Watching Man U coming down the pitch, running like a sort of Russian ballet, was extraordinary.”
“The English Game” is based on real events and centers on Fergus Suter, a Scot regarded as the first full-time professional. He was lured to the mill town of Darwen in England's Lancashire region to join the local team, the first player to earn a salary for his skill.
It was a time in England when the rules of soccer had been codified by the elite — bankers and lawyers who wore white tie and tails for dinner and considered the game something only gentlemen participated in.
But it was attracting fans across the social spectrum and especially finding root in industrial towns among factory workers. They were challenging the elite not just on the pitch but also in the streets, demanding better treatment, higher wages and unions.
What Fellowes found was that social changes in Britain at the time mirrored the changes in soccer, with each reinforcing the other. “I thought this is kind of playing out in miniature of what was happening in Western Europe on a grand scale.”
On the pitch, working men from Lancashire teams like Blackburn Rovers were increasingly beating teams made up of upper-crust Eton College alumni, using speed and passing to beat their better-nourished class rivals.
Soccer was also helping industrial towns bind together, creating a sense of community and eventually spurring workers to demand changes together in the way they were treated.
"Here was something that would bind them into a unit, that would bind them into a community," he says. "Most human beings spend their lives trying to feel they belong to something that has value. And here it was just given on a plate.”
Rory Aitken, executive producer of "The English Game," calls Fellowes as much a historian as a drama writer and credits him for unearthing the little-known origin of a sport that has some 4 billion fans.
“It's not just a narrow football story. It's a big, period epic that tells us about the history of the world while telling it through the medium of football. Who would have expected that?” Aitken said.
Fellowes is a busy man of late. In addition to the new series, he's got “The Gilded Age,” a show about New York City in the 1880s, for HBO, and “Belgravia,” a drama based on his novel of the same title, on Epix. Plus, there's a second “Downton Abbey” film.
“The English Game" is filled with Fellowes' brimming sense of humanity and respect for all sides. He may in real life be a Lord, but that hasn't stopped his sympathy for the working class.
“My philosophy is a simple one, really,” he says. “I believe that most men and women are doing their best. Whatever they have been born to, whatever they've given, they're trying to do their best. Of course, there are some people who are not trying to do their best, but they are very much in the minority.”
He credits his wife, Emma Joy Kitchener, for his hopefulness. “I live with a tremendous optimist. And I think an innate pessimism has been sort of disciplined by her," he says. "She is an optimist about absolutely everything. And I think I have caught it a bit.”
1 note · View note
spankedbyspike · 4 years
Text
The Middle Class all over the World
Losing the Lead
The American Middle Class Is No Longer the World’s Richest
The American middle class, long the most affluent in the world, has lost that honor, and many Americans are dissatisfied with the state of the country. “Things are pretty flat,” said Kathy Washburn of Mount Vernon, Iowa. “You have mostly lower level and high and not a lot in between.”
By David Leonhardt and Kevin Quealy
April 22, 2014
The American middle class, long the most affluent in the world, has lost that distinction.
While the wealthiest Americans are outpacing many of their global peers, a New York Times analysis shows that across the lower- and middle-income tiers, citizens of other advanced countries have received considerably larger raises over the last three decades.
After-tax middle-class incomes in Canada — substantially behind in 2000 — now appear to be higher than in the United States. The poor in much of Europe earn more than poor Americans.
The numbers, based on surveys conducted over the past 35 years, offer some of the most detailed publicly available comparisons for different income groups in different countries over time. They suggest that most American families are paying a steep price for high and rising income inequality.
Although economic growth in the United States continues to be as strong as in many other countries, or stronger, a small percentage of American households is fully benefiting from it. Median income in Canada pulled into a tie with median United States income in 2010 and has most likely surpassed it since then. Median incomes in Western European countries still trail those in the United States, but the gap in several — including Britain, the Netherlands and Sweden — is much smaller than it was a decade ago.
In European countries hit hardest by recent financial crises, such as Greece and Portugal, incomes have of course fallen sharply in recent years.
The income data were compiled by LIS, a group that maintains the Luxembourg Income Study Database. The numbers were analyzed by researchers at LIS and by The Upshot, a New York Times website covering policy and politics, and reviewed by outside academic economists.
The struggles of the poor in the United States are even starker than those of the middle class. A family at the 20th percentile of the income distribution in this country makes significantly less money than a similar family in Canada, Sweden, Norway, Finland or the Netherlands. Thirty-five years ago, the reverse was true.
LIS counts after-tax cash income from salaries, interest and stock dividends, among other sources, as well as direct government benefits such as tax credits.
The findings are striking because the most commonly cited economic statistics — such as per capita gross domestic product — continue to show that the United States has maintained its lead as the world’s richest large country. But those numbers are averages, which do not capture the distribution of income. With a big share of recent income gains in this country flowing to a relatively small slice of high-earning households, most Americans are not keeping pace with their counterparts around the world.
“The idea that the median American has so much more income than the middle class in all other parts of the world is not true these days,” said Lawrence Katz, a Harvard economist who is not associated with LIS. “In 1960, we were massively richer than anyone else. In 1980, we were richer. In the 1990s, we were still richer.”
That is no longer the case, Professor Katz added.
Median per capita income was $18,700 in the United States in 2010 (which translates to about $75,000 for a family of four after taxes), up 20 percent since 1980 but virtually unchanged since 2000, after adjusting for inflation. The same measure, by comparison, rose about 20 percent in Britain between 2000 and 2010 and 14 percent in the Netherlands. Median income also rose 20 percent in Canada between 2000 and 2010, to the equivalent of $18,700.
The most recent year in the LIS analysis is 2010. But other income surveys, conducted by government agencies, suggest that since 2010 pay in Canada has risen faster than pay in the United States and is now most likely higher. Pay in several European countries has also risen faster since 2010 than it has in the United States.
Three broad factors appear to be driving much of the weak income performance in the United States. First, educational attainment in the United States has risen far more slowly than in much of the industrialized world over the last three decades, making it harder for the American economy to maintain its share of highly skilled, well-paying jobs.
Americans between the ages of 55 and 65 have literacy, numeracy and technology skills that are above average relative to 55- to 65-year-olds in the rest of the industrialized world, according to a recent study by the Organization for Economic Cooperation and Development, an international group. Younger Americans, though, are not keeping pace: Those between 16 and 24 rank near the bottom among rich countries, well behind their counterparts in Canada, Australia, Japan and Scandinavia and close to those in Italy and Spain.
A second factor is that companies in the United States economy distribute a smaller share of their bounty to the middle class and poor than similar companies elsewhere. Top executives make substantially more money in the United States than in other wealthy countries. The minimum wage is lower. Labor unions are weaker.
And because the total bounty produced by the American economy has not been growing substantially faster here in recent decades than in Canada or Western Europe, most American workers are left receiving meager raises.
Finally, governments in Canada and Western Europe take more aggressive steps to raise the take-home pay of low- and middle-income households by redistributing income.
Janet Gornick, the director of LIS, noted that inequality in so-called market incomes — which does not count taxes or government benefits — “is high but not off the charts in the United States.” Yet the American rich pay lower taxes than the rich in many other places, and the United States does not redistribute as much income to the poor as other countries do. As a result, inequality in disposable income is sharply higher in the United States than elsewhere.
Whatever the causes, the stagnation of income has left many Americans dissatisfied with the state of the country. Only about 30 percent of people believe the country is headed in the right direction, polls show.
“Things are pretty flat,” said Kathy Washburn, 59, of Mount Vernon, Iowa, who earns $33,000 at an Ace Hardware store, where she has worked for 23 years. “You have mostly lower level and high and not a lot in between. People need to start in between to work their way up.”
Middle-class families in other countries are obviously not without worries — some common around the world and some specific to their countries. In many parts of Europe, as in the United States, parents of young children wonder how they will pay for college, and many believe their parents enjoyed more rapidly rising living standards than they do. In Canada, people complain about the costs of modern life, from college to monthly phone and Internet bills. Unemployment is a concern almost everywhere.
But both opinion surveys and interviews suggest that the public mood in Canada and Northern Europe is less sour than in the United States today.
“The crisis had no effect on our lives,” Jonas Frojelin, 37, a Swedish firefighter, said, referring to the global financial crisis that began in 2007. He lives with his wife, Malin, a nurse, in a seaside town a half-hour drive from Gothenburg, Sweden’s second-largest city.
They each have five weeks of vacation and comprehensive health benefits. They benefited from almost three years of paid leave, between them, after their children, now 3 and 6 years old, were born. Today, the children attend a subsidized child-care center that costs about 3 percent of the Frojelins’ income.
Even with a large welfare state in Sweden, per capita G.D.P. there has grown more quickly than in the United States over almost any extended recent period — a decade, 20 years, 30 years. Sharp increases in the number of college graduates in Sweden, allowing for the growth of high-skill jobs, have played an important role.
Elsewhere in Europe, economic growth has been slower in the last few years than in the United States, as the Continent has struggled to escape the financial crisis. But incomes for most families in Sweden and several other Northern European countries have still outpaced those in the United States, where much of the fruits of recent economic growth have flowed into corporate profits or top incomes.
This pattern suggests that future data gathered by LIS are likely to show similar trends to those through 2010.
There does not appear to be any other publicly available data that allows for the comparisons that the LIS data makes possible. But two other sources lead to broadly similar conclusions.
A Gallup survey conducted between 2006 and 2012 showed the United States and Canada with nearly identical per capita median income (and Scandinavia with higher income). And tax records collected by Thomas Piketty and other economists suggest that the United States no longer has the highest average income among the bottom 90 percent of earners.
One large European country where income has stagnated over the past 15 years is Germany, according to the LIS data. Policy makers in Germany have taken a series of steps to hold down the cost of exports, including restraining wage growth.
Even in Germany, though, the poor have fared better than in the United States, where per capita income has declined between 2000 and 2010 at the 40th percentile, as well as at the 30th, 20th, 10th and 5th.
Malin Frojelin lives with her two children, Engla, 6, and Nils, 3, in Vallda, Sweden, along with her husband, Jonas. Vallda is about a 30-minute drive from Gothenburg, the second-largest city in the country. 
More broadly, the poor in the United States have trailed their counterparts in at least a few other countries since the early 1980s. With slow income growth since then, the American poor now clearly trail the poor in several other rich countries. At the 20th percentile — where someone is making less than four-fifths of the population — income in both the Netherlands and Canada was 15 percent higher than income in the United States in 2010.
By contrast, Americans at the 95th percentile of the distribution — with $58,600 in after-tax per capita income, not including capital gains — still make 20 percent more than their counterparts in Canada, 26 percent more than those in Britain and 50 percent more than those in the Netherlands. For these well-off families, the United States still has easily the world’s most prosperous major economy.
Rachel Z. Arndt contributed reporting from Mount Vernon, Iowa, and David Crouch from Vallda, Sweden.
A version of this article appears in print on April 22, 2014, on Page A1 of the New York edition with the headline: U.S. Middle Class Is No Longer World’s Richest.
0 notes
Text
Religion: Bound by Loving Ties. Jeffrey R. Holland. ACU Sunday Series.
https://speeches.byu.edu/talks/jeffrey-r-holland/religion-bound-loving-ties/
True religion, the tie that binds us to God and to each other, not only seals our family relationships in eternity but also heightens our delight in those family experiences while in mortality.
One of my BYU professors of yesteryear—actually quite a few yesteryears—was Edward L. Hart, who wrote the text of a much-loved hymn in the Church. The second verse of that hymn, “Our Savior’s Love,” reads this way:
  The Spirit, voice
Of goodness, whispers to our hearts
A better choice
Than evil’s anguished cries.
Loud may the sound
Of hope ring till all doubt departs,
And we are bound
To him by loving ties.1
  An omnibus word familiar to us all that summarizes these “loving ties” to our Heavenly Father is religion. Scholars debate the etymology of that word just as scholars and laymen alike debate almost everything about the subject of religion, but a widely accepted account of its origin suggests that our English word religion comes from the Latin word religare, meaning “to tie” or, more literally, “to re-tie.”2 In that root syllable of ligare you can hear the echo of a word such as ligature, which is what a doctor uses to sew us up if we have a wound.
  So, for our purpose today, religion is that which unites what was separated or holds together that which might be torn apart—an obvious need for us, individually and collectively, given the trials and tribulations we all experience here in mortality.
  What is equally obvious is that the great conflict between good and evil, right and wrong, the moral and the immoral—conflict that the world’s great faiths and devoted religious believers have historically tried to address—is being intensified in our time and is affecting an ever-wider segment of our culture. And let there be no doubt that the outcome of this conflict truly matters, not only in eternity but in everyday life as well. Will and Ariel Durant put the issue squarely as they reflected on what they called “the lessons of history.” “There is no significant example in history,” they said, “of [any] society successfully maintaining moral life without the aid of religion.”3
  If that is true—and surely we feel it is—then we should be genuinely concerned over the assertion that the single most distinguishing feature of modern life is the rise of secularism with its attendant dismissal of, cynicism toward, or marked disenchantment with religion.4 How wonderfully prophetic our beloved Elder Neal A. Maxwell was—clear back in 1978—when he said in a BYU devotional:
  We shall see in our time a maximum . . . effort . . . to establish irreligion as the state religion. [These secularists will use] the carefully preserved . . . freedoms of Western civilization to shrink freedom even as [they reject] the value . . . of our rich Judeo-Christian heritage.
  Continuing on, he said:
  Your discipleship may see the time come when religious convictions are heavily discounted. . . . This new irreligious imperialism [will seek] to disallow certain . . . opinions simply because those opinions grow out of religious convictions.5
  My goodness! That forecast of turbulent religious weather issued nearly forty years ago is steadily being fulfilled virtually every day somewhere in the world in the minimization of—or open hostility toward—religious practice, religious expression, and, even in some cases, the very idea of religious belief itself. Of course there is often a counterclaim that while some in the contemporary world may be less committed to religion per se, nevertheless many still consider themselves “spiritual.” But, frankly, that palliative may not offer much in terms of collective moral influence in society if “spirituality” means only gazing at the stars or meditating on a mountaintop.
  Indeed, many of our ancestors in generations past lived, breathed, walked, and talked in a world full of “spirituality,” but that clearly included concern for the state of one’s soul, an attempt to live a righteous life, some form of Church attendance, and participation in that congregation’s charitable service in the community. Yes, in more modern times individuals can certainly be “spiritual” in isolation, but we don’t live in isolation. We live as families, friends, neighbors, and nations. That calls for ties that bind us together and bind us to the good. That is what religion does for our society, leading the way for other respected civic and charitable organizations that do the same.
  This is not to say that individual faith groups in their many different forms and with their various conflicting beliefs are all true and equally valuable; obviously they cannot be. Nor does it say that institutional religions collectively—churches, if you will—have been an infallible solution to society’s challenges; they clearly have not been. But if we speak of religious faith as among the highest and most noble impulses within us, then to say that so-and-so is a “religious person” or that such and such a family “lives their religion” is intended as a compliment. Such an observation would, as a rule, imply that these people try to be an influence for good, try to live to a higher level of morality than they might otherwise have done, and have tried to help hold the sociopolitical fabric of their community together.
  Well, thank heaven for that, because the sociopolitical fabric of a community wears a little thin from time to time—locally, nationally, or internationally—and a glance at the evening news tells us this is one of those times. My concern is that when it comes to binding up that fabric in our day, the ligatures of religion are not being looked to in quite the way they once were. My boyhood friend and distinguished legal scholar Elder Bruce C. Hafen framed it even more seriously than that:
  Democracy’s core values of civilized religion . . . are now under siege—partly because of violent criminals who claim to have religious motives; partly because the wellsprings of stable social norms once transmitted naturally by religion and marriage-based family life are being polluted . . . ; and partly because the advocates of some causes today have marshaled enough political and financial capital to impose by intimidation, rather than by reason, their anti-religion strategy of “might makes right.”6
  There are many colliding social and cultural forces in our day that contribute to this anti-religious condition, which I am not going to address in these remarks. But I do wish to make the very general observation that part of this shift away from respect for traditional religious beliefs—and even the right to express those religious beliefs—has come because of a conspicuous shift toward greater and greater preoccupation with the existential circumstances of this world and less and less concern for—or even belief in—the circumstances, truths, and requirements of the next.
  Call it secularism or modernity or the technological age or existentialism on steroids—whatever you want to call such an approach to life, we do know a thing or two about it. Most important, we know that it cannot answer the yearning questions of the soul, nor is it substantial enough to sustain us in times of moral crises.
  Rabbi Lord Jonathan Sacks, formerly Chief Rabbi of the United Hebrew Congregations of the British Commonwealth for twenty-two years, a man whom I admire very much, has written:
  What the secularists forgot is that Homo sapiens is the meaning-seeking animal. If there is one thing the great institutions of the modern world do not do, it is to provide meaning.7
  We are so fortunate—and grateful—that modern technology gives us unprecedented personal freedom, access to virtually unlimited knowledge, and communication capability beyond anything ever known in this world’s history, but neither technology nor its worthy parent science can give us much moral guidance as to how to use that freedom, where to benefit from that knowledge, or what the best purpose of our communication should be. It has been principally the world’s great faiths—religion, those ligatures to the Divine we have been speaking of—that do that, that speak to the collective good of society, that offer us a code of conduct and moral compass for living, that help us exult in profound human love, and that strengthen us against profound human loss. If we lose consideration of these deeper elements of our mortal existence—divine elements, if you will—we lose much, some would say most, of that which has value in life.
  The legendary German sociologist Max Weber once described such a loss of religious principle in society as being stuck in an “iron cage” of disbelief.8 And that was in 1904! Noting even in his day the shift toward a more luxurious but less value-laden society, a society that was giving away its priceless spiritual and religious roots, Weber said in 1918 that “not summer’s bloom lies ahead of us, but rather a polar night of icy darkness.”9
  But of course not everyone agrees that religion does or should play such an essential role in civilized society. Recently the gloves have come off in the intellectual street fighting being waged under the banner of the “New Atheists.” Figures like Richard Dawkins, Sam Harris, Daniel Dennett, and the late Christopher Hitchens are some of the stars in what is, for me, a dim firmament. These men are as free to express their beliefs—or, in their case, disbeliefs—as any other, but we feel about them what one Oxford don said about a colleague: “On the surface, he’s profound, but deep down, he’s [pretty] superficial.”10
  Rabbi Sacks said that surely it is mind-boggling to think that a group of bright secular thinkers in the twenty-first century really believe that if they can show, for example, “that the universe is more than 6,000 years old” or that a rainbow can be explained other “than as a sign of God’s covenant after the Flood,” that somehow such stunning assertions will bring all of “humanity’s religious beliefs . . . tumbling down like a house of cards and we would be left with a serene world of rational non-believers,”11—serene except perhaps when they whistle nervously past the local graveyard.
  A much harsher assessment of this movement came from theologian David Bentley Hart, who wrote:
  Atheism that consists entirely in vacuous arguments afloat on oceans of historical ignorance, made turbulent by storms of strident self-righteousness, is as contemptible as any other form of dreary fundamentalism.12
  We are grateful that a large segment of the human population does have some form of religious belief, and in that sense we have not yet seen a “polar night of icy darkness”13 envelop us. But no one can say we are not seeing some glaciers on the move.
  Charles Taylor, in his book with the descriptive title A Secular Age, described the cold dimming of socioreligious light. The shift of our time, he said, has been
  from a society in which it was virtually impossible not to believe in God, to one in which faith, even for the staunchest believer, is [only] one human possibility among [many] others.14
  Charles Taylor also wrote that now, in the twenty-first century, “belief in God is no longer axiomatic.”15 Indeed, in some quarters it is not even a convenient option, it is “an embattled option.”16
  But faith has almost always been “an embattled option” and has almost always been won—and kept—at a price. Indeed, many who have walked away from faith have found the price higher than they intended to pay, such as the man who tore down the fence surrounding his new property only to learn that his next-door neighbor kept a pack of particularly vicious Rottweilers.
  David Brooks hinted at this but put it much too mildly when he wrote in his New York Times column, “Take away [the] rich social fabric [that religion has always been,] and what you are left with [are] people who are uncertain about who they really are.”17 My point about “too mildly” is that a rich social fabric, important as that is, says absolutely nothing about the moral state of one’s soul, redemption from physical death, overcoming spiritual alienation from God, the perpetuation of marriage and the family unit into eternity, and so forth—if anyone is considering such issues in a postmodern world.
  In fact, religion has been the principal influence—not the only one, but the principal one—that has kept Western social, political, and cultural life moral, to the extent that these have been moral. And I shudder at how immoral life might have been—then and now—without that influence. Granted, religion has no monopoly on moral action, but centuries of religious belief, including institutional church- or synagogue- or mosque-going, have clearly been preeminent in shaping our notions of right and wrong. Journalist William Saletan put it candidly: “Religion is the vehicle through which most folks learn and practice morality.”18
  I am stressing such points this morning because I have my eye on that future condition about which Elder Maxwell warned—a time when if we are not careful we may find religion at the margins of society rather than at the center of it, when religious beliefs and all the good works those beliefs have generated may be tolerated privately but not admitted or at least certainly not encouraged publicly. The cloud the prophet Elijah saw in the distance no larger than “a man’s hand”19 is that kind of cloud on the political horizon today. So we speak of it by way of warning, remembering the storm into which Elijah’s small cloud developed.20
  But whatever the trouble along the way, I am absolutely certain how this all turns out. I know the prophecies and the promises given to the faithful, and I know our collective religious heritage—all the Western world’s traditional religious beliefs, varied as they are—is remarkably strong and resilient. The evidence of that religious heritage is all around us, including at great universities, or at least it once was—and fortunately still is at BYU.
  Just to remind us how rich the ambiance of religion is in Western culture and because this is Campus Education Week, let me mention just a few of the great religiously influenced non-LDS pieces of literature that I met while pursuing my education on this campus fifty years ago, provincial and dated as my list is. I do so while stressing how barren our lives would be had there not been the freedom for writers, artists, and musicians to embrace and express religious values or discuss religious issues.
  I begin by noting the majestic literary—to say nothing of the theological—influence of the King James Bible, what one of the professors I knew later at Yale called “the sublime summit of literature in [the] English [language],”21 the greatest single influence on the world’s creative literature for the last 400 years. I think also of what is probably the most widely read piece of English literature other than the Bible: John Bunyan’s Pilgrim’s Progress.
  Five decades after I first read them, I am still moved by the magnificence of two of the greatest poems ever written by the hand of man: Dante Alighieri’s Divine Comedy and John Milton’s Paradise Lost. Certainly the three greatest American novels I read at BYU were Herman Melville’s Moby Dick, Nathaniel Hawthorne’s The Scarlet Letter, and Mark Twain’s The Adventures of Huckleberry Finn—each in its own way a religious text and all more meaningful in my reading of them now than when I was a student on this campus so long ago. So too it is with my encounter with Russian writers, especially Fyodor Dostoyevsky and Leo Tolstoy.
  Then—to name only a handful—you add British giants like George Herbert, John Donne, William Blake, and Robert Browning; throw in Americans like Emily Dickinson, William Faulkner, and Flannery O’Connor; then an American who became British, like T. S. Eliot, and a Briton who became American, like W. H. Auden; and for good luck throw in an Irishman like W. B. Yeats and you have biblical imagery, religious conflict, and wrenching questions of sin, society, and salvation on virtually every page you turn.
  Having mentioned a tiny bit of the religiously related literature I happened to encounter as a student, I now note an equally tiny bit of the contribution that religious sensibility has provoked in the heart of the visual artist and the soul of the exultant musician. [An audiovisual presentation was shown.]
Brothers and sisters, my testimony this morning, as one observer recently wrote, is that “over the long haul, religious faith has proven itself the most powerful and enduring force in human history.”22 Roman Catholic scholar Robert Royal made the same point, reaffirming that for many, “religion remains deep, widespread, and persistent, to the surprise and irritation of those who claimed to have cast aside [religious] illusion”23—to those, I might add, who underestimated the indisputable power of faith.
The indisputable power of faith. The most powerful and enduring force in human history. The influence for good in the world. The link between the highest in us and our highest hopes for others. That is why religion matters. Voices of religious faith have elevated our vision, deepened our human conversation, and strengthened both our personal and collective aspirations since time began. How do we even begin to speak of what Abraham, Moses, David, Isaiah, Jeremiah, Nephi, Mormon, and Moroni have given us? Or of what Peter, James, John, the Apostle Paul, Joseph Smith, and Thomas S. Monson mean to us?
  It is impossible to calculate the impact that prophets and apostles have had on us, but, putting them in a special category of their own, we can still consider the world-shaping views and moral force that have come to us from a Martin Luther or a John Calvin or a John Wesley in earlier times, or from a Billy Graham or a Pope Francis or a Dalai Lama in our current age. In this audience today we are partly who we are because some 450 years ago, men like Nicholas Ridley and Hugh Latimer, being burned at the stake in Oxford, called out to one another that they were lighting such a religious fire in England that it would never be put out in all the world. Later William Wilberforce applied just such Christian conviction to abolishing the slave trade in Great Britain. As an ordained minister, Martin Luther King Jr. continued the quest for racial and civil justice through religious eloquence at the pulpit and in the street. George Washington prayed at Valley Forge, and Abraham Lincoln’s most cherished volume in his library, which he read regularly, was his Bible—out of which he sought to right a great national wrong and from which, in victory, he called for “malice toward none, with charity for all, with firmness in the right as God gives us to see the right.”24
  So the core landscape of history has been sketched by the pen and brush and word of those who invoke a Divine Creator’s involvement in our lives and who count on the ligatures of religion to bind up our wounds and help us hold things together.
  Speaking both literally and figuratively of a recurring feature on that landscape, Will Durant wrote:
These [church] steeples, everywhere pointing upward, ignoring despair and lifting hope, these lofty city spires, or simple chapels in the hills—they rise at every step from the earth to the sky; in every village of every nation on the globe they challenge doubt and invite weary hearts to consolation. Is it all a vain delusion? Is there nothing beyond life but death, and nothing beyond death but decay? We cannot know. But as long as men suffer these steeples will remain.25
  Of course, those of us who are believers have very specific convictions about what we can know regarding the meaning of those ubiquitous church steeples.
In that spirit let me conclude with my heartfelt apostolic witness of truths I do know regarding the ultimate gift true religion provides us. I have been focusing on the social, political, and cultural contributions that religion has provided us for centuries, but I testify that true religion—the gospel of Jesus Christ—gives us infinitely more than that; it gives us “peace in this world, and eternal life in the world to come,”26 as the scripture phrases it.
True religion brings understanding of and loyalty to our Father in Heaven and His uncompromised love for every one of His spirit children—past, present, and future. True religion engenders in us faith in the Lord Jesus Christ and hope in His Resurrection. It encourages love, forbearance, and forgiveness in our interactions with one another, as He so magnanimously demonstrated them in His.
True religion, the tie that binds us to God and to each other, not only seals our family relationships in eternity but also heightens our delight in those family experiences while in mortality. Well beyond all the civic, social, and cultural gifts religion gives us is the mercy of a loving Father and Son who conceived and carried out the atoning mission of that Son, the Lord Jesus Christ, suturing up that which was torn, bonding together that which was broken, healing that which was ill or imperfect, “proclaim[ing] liberty to the captives, and . . . opening . . . the prison to them that are bound.”27
  Because my faith, my family, my beliefs, and my covenants—in short, my religion—mean everything to me, I thank my Father in Heaven for religion and pray for the continued privilege to speak of it so long as I shall live. May we think upon the religious heritage that has been handed down to us—at an incalculable price in many instances—and in so remembering not only cherish that heritage more fervently but live the religious principles we say we want to preserve. Only in the living of our religion will the preservation of it have true meaning. It is in that spirit that we seek the good of our fellow men and women and work toward the earthly kingdom of God rolling forth, so that the heavenly kingdom of God may come.
  May our religious privileges be cherished, preserved, and lived, binding us to God and to each other until that blessed millennial day comes, I earnestly pray in the name of Jesus Christ, amen.
   Jeffrey R. Holland was a member of the Quorum of the Twelve Apostles of The Church of Jesus Christ of Latter-day Saints when this devotional address was given on 16 August 2016 during BYU Campus Education Week.
0 notes
arcticdementor · 5 years
Link
Last year I had an interesting conversation with someone I’ll call the Washington Insider. She asked me why my structural-demographic model predicted rising instability in the USA, probably peaking with a major outbreak of political violence in the 2020s. I started giving the explanation based on the three main forces: popular immiseration, intra-elite competition, and state fragility. But I didn’t get far because she asked me, what immiseration? What are you talking about? We’ve never lived better than today. Global poverty is declining, child mortality is declining, violence is declining. We have access to a level of technology that is miraculous compared to what previous generations had. Just look at the massive data gathered together by Max Roser, or read Steven Pinker’s books to be impressed with how good things are.
There are three biases that help sustain this rosy view. First, the focus on global issues. But the decrease of poverty in China (which is what drives declining global poverty, because China’s population is so huge), or the drop in child mortality in Africa, is irrelevant to working Americans. People everywhere compare themselves not to some distant place, but to the standard of living they experienced in their parents’ home. And the majority of the American population sees that in many important ways they are worse off than their parents (as we will see below).
Second, the Washington Insider talks to other members of the 1 percent, and to some in the top 10 percent. The top-income segments of the American population have done fabulously in the last decades, thank you very much.
So what has been happening with the well-being of common, non-elite Americans? In my work I use three broad measures of well-being: economic, biological (health), and social.
The most common statistic one sees about economic well-being is the trend in per-capita household income. This is not a particularly good way to measure economic well-being, for two reasons. First, as households became smaller (because Americans have fewer children), the same wage of the primary breadwinner gets divided among fewer heads, which yields an illusion of things getting better. Second, as a result of the massive entry of women into the labor force, the typical household today has two breadwinners, compared with the single-wage household of fifty years ago. Furthermore, many households today have even more than two wage-earners, because adult children don’t move away. As a result of both factors, the trajectory of household income gives an overly optimistic view of how well Americans are doing economically.
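To make the arithmetic behind that caveat concrete, here is a minimal sketch with invented, purely illustrative numbers (they are not census figures): per-capita household income can rise substantially even while the primary breadwinner’s wage stays flat.

# Illustrative sketch only: made-up wages and household sizes, not census data.

def per_capita_income(wages, household_size):
    """Total household earnings divided by the number of heads in the household."""
    return sum(wages) / household_size

# Mid-century-style household: one breadwinner, a spouse at home, three kids.
then = per_capita_income(wages=[50_000], household_size=5)  # 10,000 per head

# Present-day household: the same primary wage, plus a second earner and an
# adult child living at home with a part-time job, and only one young child.
now = per_capita_income(wages=[50_000, 35_000, 12_000], household_size=4)  # 24,250 per head

print(f"then: {then:,.0f} per head; now: {now:,.0f} per head")
# Per-capita household income more than doubles even though the primary
# breadwinner's wage has not moved at all.

The point of the sketch is simply that both the denominator and the number of earners moved, so the headline statistic can improve without any individual wage growth.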
The pattern is unmistakable: rapid, almost linear growth to the late 1970s, stagnation and decline (especially for unskilled labor) thereafter. Here’s a more detailed breakdown of men’s wages since 1979, broken down by wage percentile (10th is the poorest, 95th is the richest):
Why did this happen? I answer this question in a series of posts, Why Real Wages Stopped Growing (see it in Popular Blogs and Series). The TL;DR answer is that it was a combination of immigration, loss of manufacturing jobs overseas, massive entry of women into the labor force (thus, this factor both inflated household income and, perversely, depressed wages for men), and changing attitudes towards labor. A model incorporating these influences does a pretty decent job of capturing both the turning point of the 1970s and fluctuations afterwards:
Another important indicator is availability of jobs. The jobless rate published by government agencies is not a very useful statistic, because it tells us about short-term fluctuations, and excludes people who gave up on the job market. A better measure is the labor participation curve, especially for men:
An amusing way to spin this bad news was pointed out by one commenter on my previous post. An NBER article by Mark Aguiar and Erik Hurst, "Measuring Trends in Leisure", optimistically concluded that between 1965 and 2003 "leisure for men increased by 6-8 hours per week" and that "this increase in leisure corresponds to roughly an additional 5 to 10 weeks of vacation per year." A closer reading of the article, however, shows that this "leisure increase" was driven by a decline in "market work hours". In other words, all those extra 10 percent of men with a high school education or less, who have dropped out of the workforce since 1970, are simply enjoying their "vacations."
Panel (a) shows that average stature of native-born Americans grew rapidly until the 1970s, and then stagnated. A real shocker is that for some segments of the population (Black women) it actually declined in absolute terms. Panel (b) shows that there is a clear relationship between economic and biological measures of well-being (it’s further explained in Ages of Discord).
For another health measure, life expectancy, we have a similar situation. Overall, America is losing ground in relative terms (for example, in comparison to robustly growing life expectancies in Western Europe). For some segments of the population the decrease is in absolute terms. Here’s a particularly revealing look at the data:
There is a long-term increase in the age of marriage driven by modernization (top panel), so we are interested in fluctuations around the trend (bottom panel). During periods of increasing well-being (for example, between 1900 and 1960), the average age of marriage tends to drop. Immiseration causes it to rise. In fact, an increasing proportion of people never marry at all. Many of them stay with their parents, and their earnings help to inflate household income statistics.
We know from the work of Jonathan Haidt and others that one of the most powerful factors explaining personal well-being is social embeddedness. Having a spouse is one of the most fundamental ways of being embedded. But a variety of other indicators, collected by Robert Putnam, shows that Americans are becoming increasingly less connected (I’ve written about it in another post).
In short: a variety of indicators show that the well-being of the common American has been declining for the last four decades. The technical term for this in structural-demographic theory is immiseration.
20 notes · View notes
Link
The Center for American Progress (CAP), one of Washington, DC’s most influential liberal think tanks with deep ties to the Obama administration and Hillary Clinton campaign, has proposed a big idea for raising Americans’ wages.
A paper by CAP’s David Madland calls for the creation of national wage boards, tasked with setting minimum wage and benefit standards for specific industries. Fast-food companies, say, would send representatives to meet with union officials and other worker representatives, and hammer out a deal that ensures workers get a fair shake. Same goes for nurses, or retail workers, or home health aides, or accountants.
“Bargaining panels would have 11 members — five representing employers, five representing workers and one representing the government,” Madland explains. “The government representative would be the U.S. secretary of labor or their delegate. Employers would choose employer representatives through the employers’ industry associations.” Employees would be represented by unions, or other worker representatives. The secretary of labor would create separate boards for different industries and occupations, and work with unions and other worker groups to enforce the wage rules once they’re adopted.
This may seem like an extreme idea, an unprecedented government and union intrusion into the free market. But it reflects a model already gaining steam in some liberal states (like New York and California), and which owes a lot to policies in Europe. It’s the latest sign that pro-labor voices in America are looking to counterparts in France, Germany, and elsewhere across the Atlantic for signs of how to revive the labor movement and get the working class’s wages rising again.
And while ideas like wage boards and giving workers spots on corporate boards may seem pie-in-the-sky today, they could easily become part of the next Democratic president’s agenda, or become law in left-leaning states even before 2020.
Labor unions in America today are in crisis. In the mid-1950s, a third of Americans belonged to a labor union. Now, only 10.7 percent do, including a minuscule 6.4 percent of private sector workers. The decline of union membership explains as much as a third of the increase in inequality in the US, has caused voter turnout among low-income workers to crater, and has weakened labor’s ability to check corporate influence in DC and state capitals.
The future for traditional unions looks so bleak that a growing number of labor scholars and activists are coming to the conclusion that the US model, which relies on individual workers in individual workplaces getting together and organizing on their own, is dead and can’t be revived. What’s needed, they argue, is a more national or industry-wide approach to supplement or replace the old model of individual workplace-level organizing.
“In 2016, we had the most pro-labor president since the 1960s, the most pro-labor secretary of labor since [FDR’s Secretary] Frances Perkins, an economy with shrinking unemployment and rising wages — and yet we lost a quarter-million union members in the United States,” says David Rolf, president of SEIU 775, a local union representing home care workers in Washington and Montana. “We need to be trying everything.”
The solution, Rolf and others have come to believe, is to look to strategies that have worked abroad. Most European countries still have far greater levels of union coverage than the US. As of 2013, more than two-thirds of workers in Denmark, Sweden, and Finland were union members. In France and Austria, a minority of workers are in unions, but 98 percent are covered by collective bargaining contracts.
Not coincidentally, none of those countries relies as heavily on workplace-by-workplace organizing as the US does. In its stead, they use:
Wage boards setting minimums for whole industries or occupations, like the ones Madland proposes.
Works councils, which are committees elected by workers in their workplaces meant to serve as a vehicle to register concerns and resolve disputes with management, even in workplaces that are not union-organized.
Codetermination, a system in which workers have the ability to elect members to the company’s corporate board, giving them a voice in the company’s high-level decision-making.
Union-administered unemployment insurance, which gives workers a reason to join unions and pay dues even if their specific workplace isn’t organized with a given union.
Since the 2016 election, liberals and the Democratic Party have increasingly embraced a variety of European tax and spending policies, from single-payer health care to tuition-free college. What writers like Madland are suggesting is that the party should also grow more ambitious, and draw more inspiration from Europe, when it comes to labor and work issues. And it’s a suggestion that the American left and worker movement appear ready for, given the surge in teacher strikes and walkouts in recent months.
“The 20th-century model is dead. It will not come back,” Rolf says. “We need fundamentally different ideas of how to build a labor movement.”
Sally Field launches a union drive. 20th Century Fox
Before we get into all these exciting ideas percolating among labor thinkers and organizers about how to change the way American unions and labor relations work, let’s first review how unions in America do work.
The model of unionizing that dominates American labor, familiar from movies like Norma Rae, has been in place since the 1935 National Labor Relations Act. For a union to be formed, at least 30 percent of workers in a workplace petition for a union election. The National Labor Relations Board sets a time and place for the election to be held. If a majority of workers vote to be represented, then they’re all unionized. Sometimes, as happened at Vox Media, companies will voluntarily recognize a union for which a majority of employees have expressed support.
Because unionization happens in individual companies and workplaces, the system is known as “enterprise-level” bargaining. And if you’re covered by an enterprise-level union contract, the system works pretty well. Unionized workers in the US enjoy significantly higher wages and better benefits than nonunion workers, and have greater recourse if they’re being mistreated by their employer.
The problem is that as unions shrink, fewer and fewer people get those benefits — and that’s partly due to the structure of the enterprise-level bargaining system itself. “It creates all these perverse incentives for employers to oppose workers trying to join the union,” says Madland.
Unions get higher pay for their members by demanding money that would otherwise go to shareholders and executives, and so the latter have every reason to fight union drives.
But according to Princeton economist Henry Farber and Harvard sociologist Bruce Western, an even bigger reason for the decline of unions than corporate resistance to organizing drives is that unionized companies in the US have added fewer jobs over time than their nonunion counterparts.
The slower growth has a few causes: Unions were most successful in now-stagnating or shrinking industries like manufacturing and transportation; investors are less willing to put money into firms where unions capture some of their profits; and unions increase labor costs for employers, who respond by hiring fewer workers. Western and Farber found that unionized firms’ slower growth accounted for most of the decline in union membership between the 1970s and ’90s.
But workers in most European countries, and some other rich countries outside the US, have figured out an ingenious way around this. Unions there bargain not at the company level but at the sector level — negotiating for all workers in an entire industry rather than just one company or workplace.
In Sweden, for example, bargaining takes place at three levels: nationally for all industries, between the national union confederations and an association representing all employers; nationally, for specific industries, between the relevant unions and employers; and locally among individual companies. For the vast majority of workers, wages are set at a combination of the three levels, with only very few having deals set primarily at the company level.
Because every company covered by the national deals has to abide by the same pay and benefit rules regardless of how many of their employees are union members, those companies have less incentive to discourage union membership among their workforce. Firms with more union members don’t have any competitive disadvantage relative to firms with fewer: They’re all paying the same wages and offering the same benefits. And employment growth doesn’t necessarily vary among firms based on how many workers are in unions, so there’s no reason for union membership to decay as firms with more union members do worse.
Allowing a union of fast-food workers to reach an agreement with restaurant owners that gets uniform benefits for everyone might seem impossible in the US. And at the national level, it probably is. But the recent victorious fight for a $15 minimum wage in New York offers a path to sectoral bargaining at the state level.
Organizers there achieved a $15 minimum wage for fast-food workers by convening a wage board. Wage boards have the authority to mandate pay scales and benefits for whole industries, after consultation with businesses and unions. That’s an awful lot like how European countries implement sectoral bargaining. (The New York effort helped inspire Madland’s nationwide proposal.)
In an influential article in the Yale Law Journal, Kate Andrias, a law professor at the University of Michigan and an Obama administration veteran, argued that the “Fight for $15” campaign is, effectively, a sectoral bargaining effort meant to raise standards for a whole segment of low-wage workers regardless of their specific employer.
“I think states can move forward with wage board approaches even absent federal legislative reform,” Andrias told me.
The fact that this can happen at the state level is crucial. Even modest labor law reform efforts like the WAGE Act — which would have authorized tougher penalties for employer violations of existing worker-protection laws — have languished in Congress. The National Labor Relations Act sharply limits states’ abilities to regulate labor organizing. But wage boards are totally kosher, and at least six states — New York, Massachusetts, North Dakota, California, New Jersey, and Colorado — have laws authorizing them.
Andrias is hardly alone in urging a move to sectoral bargaining. University at Buffalo law professor Matthew Dimick helped popularize the idea even before Fight for $15 took off in a paper called “Productive Unionism.” In another report released by the Center for American Progress in fall 2016, Madland called for “transforming unions from individual firm-level bargaining units into organizations or structures … that negotiate for higher wages and benefits across an entire industry or sector.” Columbia law professor Mark Barenberg wrote a report for the Roosevelt Institute in 2015 urging the same.
And in a way, sectoral bargaining is a natural extension of “alt-labor” approaches that have become popular in the labor movement in the past decade, which put less emphasis on traditional workplace organizing and more on building other groups to represent workers’ interests — like “worker centers,” which provide services to low-wage, often immigrant workers in cities and advocate for policy changes on their behalf. Those groups can push for policy changes, like minimum wage hikes, that effectively set a new labor standard for a whole industry.
“The increasing attention on sectoral bargaining is new, but it’s also part of a broader trend of experimentation that has been going on for many years, as people worried about the decline in union membership look for better ways for organizing groups to grow both their membership pools and their revenue streams,” Shayna Strom, a senior fellow at the Century Foundation and Obama administration veteran, notes.
“Sectoral bargaining is certainly getting more attention in legal academic and labor law policy debates,” Benjamin Sachs, a professor at Harvard Law School and former practicing labor lawyer, says. “The way I would think about it is that there’s an existential panic about what will happen to the labor movement. That’s not new, it’s just getting worse. … If we need unions for economic and political equality as I think we do, we have to do something to stop that downward spiral.”
Lärarnas A-kassa, the unemployment fund of the Swedish teachers union, Lärarförbundet. Lärarnas A-kassa
While sectoral bargaining could offer US unions a way out of the abyss, it has its limitations. Ninety-eight percent of French workers may be covered by some kind of bargained contract, but only 7.7 percent of French people are in a union, an even smaller share than in the US. The unions negotiate deals covering the vast majority of workers, but because those workers are covered whether or not they join, there’s little incentive to sign up and pay dues.
The low membership means unions can’t always negotiate the best deals — a significant share of sectoral contracts in France specify minimum wages lower than France’s legal minimum, meaning they have no practical effect. It also hampers unions’ ability to know what workers really want, makes them reliant on the government to “extend” deals, and weakens the unions’ financial standing because few members are around to pay dues. And it’s left the French unions less able to resist reform, like President Emmanuel Macron’s laws meant to move the country away from sectoral bargaining and toward US-style enterprise bargaining.
“Sectoral bargaining creates a free-rider problem even bigger than our current free-rider problem at the enterprise level, because all workers benefit from the higher wages that are negotiated,” Madland says. “So you have a strong disincentive to pay dues.”
But there’s a surprisingly simple plan to get around this, proposed by Dimick, the professor at the University at Buffalo School of Law. Unions could run the unemployment insurance system using subsidies from the government. That, known as the “Ghent system” after the Belgian town where it originated, is a key part of how Sweden, Denmark, Finland, and Belgium have achieved the highest union membership rates in the developed world.
The system emerged almost by accident. “Back before there was any unemployment insurance, unions just did it on their own as a mutual aid function,” Dimick says. When the depression hit and unions lacked the funds to keep paying out benefits, “State governments came to their rescue by subsidizing them. It was an easy fix to the problem of unemployment rather than enacting wholesale government insurance.”
The result was that many countries were left with totally voluntary unemployment systems. In the US, unemployment insurance is funded through taxes on employers jointly administered by the federal government and states. Participation is mandatory.
In other countries, you were required to actively walk into a union office and sign up in order to receive benefits if you lost your job. That put workers in close contact with unions and encouraged them to join; in some countries, union members are also given discounts on unemployment insurance. It’s quite rare for people to sign up for unemployment benefits but not join the union administering them.
Over time, a number of countries, like Norway and France, junked this system in favor of mandatory unemployment insurance. But countries that kept it, like Denmark and Finland, have seen extremely high union membership as a result. Their unions have also been able to do sectoral bargaining with less reliance on government; Nordic governments don’t “extend” contracts as happens in France, as unions can just cut deals themselves using their huge membership as leverage.
And the Ghent system really seems to be what made the difference. A simple comparison between Sweden, where unions run unemployment insurance, and Norway, which abandoned this system, shows that in Sweden, union membership rates kept growing for most of the 20th century.
In Norway, they lagged behind. Since Norwegian unions stopped administering unemployment, Western, the Harvard sociologist, writes, “Swedish union density has persistently exceeded the Norwegian by 20 to 30 percentage points.” Oxford political scientist Bo Rothstein, similarly, has found that adopting a Ghent system leads an additional 20 percent of the workforce to join a union.
That number implies that if the US could turn unemployment insurance over to unions, they could see membership triple from 10 percent to 30 percent, a change that would dramatically transform American politics.
There’s no way that could happen at the federal level with Republicans in charge. That’s where Dimick’s cleverest idea comes in: He thinks that progressive states like California could adopt Ghent systems all by themselves.
The Social Security Act gives states some autonomy in setting up unemployment insurance systems, and Dimick argues that a Ghent system would be an acceptable way for states to implement it. Everyone would be able to collect unemployment insurance, but those who don’t sign up for a union would get fewer unemployment benefits than those who do join.
Most labor scholars and activists I talked to were enthusiastic about the idea of letting unions administer unemployment at the state level. “Independent of union density considerations, there are reasons to believe such an approach would be valuable,” Andrias, of the University of Michigan, says. “As countless union training programs demonstrate, worker organizations can run extremely effective programs to benefit working people.”
The big limitation here, Harvard Law’s Sachs points out, is that the Department of Labor would have to sign off on whatever agency a state wants to have administer its unemployment insurance program. That’s unlikely in the Trump years. But states could also raise their own funds and set up a system without relying on federal unemployment insurance funds.
The Century Foundation’s Strom notes that both a US Ghent approach and sectoral bargaining would need some kind of avenue for workers to voice concerns about their work.
“Sectoral bargaining or a Ghent-like model seem like promising ways to grow the union movement, but because the whole point is to move beyond bargaining at the enterprise level, the focus would no longer be on workers’ struggles for respect at their specific workplaces,” she says. “I think sectoral bargaining would need to be paired with some kind of mechanism for workers to have more voice in the workplace — otherwise we would be losing out on one of the important roles that unions play today.”
A Ghent system would extend benefits to “gig economy” workers who currently lack them. Mark Ralston/AFP/Getty Images
Rolf, the Seattle SEIU president, endorsed the idea of a Ghent system in a paper he wrote for the Aspen Institute. But in partnership with the pro-labor tech billionaire Nick Hanauer, he’s proposed something even more ambitious: “Shared Security Accounts,” a system in which employers would pay for workers to have vacation, sick leave, health insurance, and 401(k) matching benefits that are portable and travel with the worker to whatever job they take.
Rolf now sees this plan, which is designed to respond to the rise of quasi-employers like Uber and TaskRabbit, as a way the Ghent system could come to America — if the security accounts were administered directly by unions.
“We intended to replace the permanently employment/firm-based benefits framework with a new framework that’s more worker-centric,” Rolf told me. As the plan developed, “we took more pains to spell out the idea of a workers’ organization being at the center.”
The benefits Rolf envisions don’t stop with unemployment insurance but also include health, retirement, paid leave, and more. But that could make the system more attractive to workers, not less, and lead more people to enter the orbit of labor unions.
As labor continues to lose membership, sectoral bargaining and the Ghent system are far from the only solutions leaders are considering.
Janice Fine, a political scientist at Rutgers, has proposed something else unions could do: enforce labor laws. Inspectors from the government are often short-staffed and underfunded, and many undocumented immigrants in low-wage jobs are understandably skeptical of cooperating with government authorities of any kind. So Fine has proposed that state and local governments — even the federal government if it wants — contract out to unions and other worker organizations to keep an eye on employers and report abuses.
“There are so many examples of business and the state working together,” Fine told me. “Professional associations set standards all the time. What makes this idea so shocking to people is not that it’s never been done. It’s that it’s worker organizations.”
Baruch College sociologist Hector Cordero-Guzmán suggests that job training (perhaps subsidized by the government) could also be a hugely fruitful activity for unions and other worker organizations, including “alt-labor” groups like worker centers.
Employers often complain that it’s not worth their while to train workers, since they could just be poached by rival companies. Outsourcing the job to unions helps solve that problem. “That employer collective action poaching problem is well-known,” Cordero-Guzmán notes. “When I saw a tweet about Ivanka Trump meeting with Germans about job training, I thought, did they tell her how the training is run through unions? They work with employers to learn what the training needs are.”
In a sense, labor law experts and activists are proposing throwing everything at the wall to see what sticks. The prevailing impression from talking to them is that the total demise of private sector unions in the US is too close at hand to do anything other than try absolutely everything.
“It’s so frustrating to hear all these single-bullet theories,” Rolf says. “Someone says we need card check, someone says striking needs to be a civil right. The reality is that the hour is too late for single bullet theories. What if we’re wrong?”
Still, it’s remarkable how many different people have all converged on something like a European approach to unionism. As Madland put it, “This feels like an idea whose time has come.”
Original Source -> The emerging plan to save the American labor movement
via The Conservative Brief
0 notes
usedcarexpertguide · 7 years
Link
At first glance it may seem like the only real distinction between cacao and cocoa is the spelling. There's a bit more to it than that…
What is cacao?
Cacao can describe any of the foods derived from cacao beans – the seeds or nuts of the cacao tree. These include cacao nibs, cacao butter, cacao mass or paste and (probably the most common) cacao powder.
Cacao v cocoa powder.
Raw cacao powder is made by cold-pressing unroasted cocoa beans. The procedure keeps the living enzymes in the cocoa and removes the fat (cacao butter).
Cocoa looks the same, but it's not. Cocoa powder is raw cacao that's been roasted at high temperatures. Unfortunately, roasting changes the molecular structure of the cocoa bean, reducing the enzyme content and lowering the overall nutritional value.
The studies touting chocolate's incredible health benefits are likely not describing your typical store-bought chocolate bar (damn deceptive researchers). The chocolate they're referring to has properties much closer to raw cacao.
What exactly are the health benefits of cacao?
Cacao powder is known to have a higher antioxidant content than cocoa and has been linked to a range of benefits. (The compounds used in these studies don't resemble sugary supermarket cocoa; they are much closer to raw cacao in form.)
These studies suggest that the compounds in cacao can:
Lower insulin resistance.
Protect your nervous system: cacao is high in resveratrol, a potent antioxidant also found in red wine, known for its ability to cross the blood-brain barrier and help protect your nervous system.
Shield nerve cells from damage.
Cut your risk of stroke.
Lower high blood pressure.
Reduce your risk of heart disease: the antioxidants in cacao help maintain healthy levels of nitric oxide (NO) in the body. Although NO has heart-benefiting qualities, such as relaxing blood vessels and lowering blood pressure, its production also creates toxins. The antioxidants in cacao neutralize these toxins, protecting your heart and helping to prevent disease.
Guard against toxins: as a potent antioxidant, cacao can repair the damage caused by free radicals and may reduce the risk of certain cancers. In fact, cacao contains far more antioxidants per 100g than acai, goji berries, and blueberries; antioxidants account for 10 percent of the weight of raw cacao.
Boost your mood: cacao can increase levels of certain neurotransmitters that promote a sense of well-being. Phenylethylamine, the same brain chemical that is released when we experience deep feelings of love, is also found in chocolate.
Provide minerals: magnesium, iron, potassium, calcium, zinc, manganese and copper.
If cacao is more beneficial than cocoa because it's raw, what exactly happens when we cook it?
Great question, and we're glad you asked. There is no current research on whether cooking raw cacao destroys its antioxidants, making it more comparable to its heated and processed cousin, cocoa. But we figure that starting with the product in its raw form has to be more beneficial than starting with an already heated and processed equivalent.
Let’s end with a fascinating bit.
Research reveals that dairy prevents the absorption of anti-oxidants from raw cacao.
If you're making a cacao smoothie, you're much better off using a non-dairy milk, such as almond or coconut, in order to reap all of the antioxidant benefits. Fact!
Another fact: did you know you can eat chocolate on our I Quit Sugar: 8-Week Program?
One question I get a lot is: what is a superfood? Surely they're just hyped-up versions of regular food.
Well, not quite… Superfoods are just that – foods that contain substantially higher amounts of antioxidants, vitamins, minerals, and other health-boosting, anti-aging, disease-fighting goodies. Some are everyday whole foods that you'll likely have tried before (think broccoli, blueberries, even the humble spud). Others are more exotic, grown in the jungles of Peru and picked by Amazonian warriors (okay, perhaps not).
To help you in your quest to conquer, or at least explore, the remarkable world of superfoods, every couple of weeks I'll be spilling the (cacao) beans on a superfood of your choice. You'll get the lowdown on what it is, why you should be eating it, and some quick and easy ways to do so (even for the non-chefs among us).
FIRST UP: CACAO – THE AMAZONIAN ANTIOXIDANT KING.
Raw cacao is rather different from the common cocoa most of us grew up with in our Afghan biscuits. Cacao (pronounced "cu-COW") refers to the Theobroma cacao tree from which cocoa is derived, and the term is used for unprocessed versions of the cacao bean.
Regular cocoa powder and chocolate have been chemically processed and roasted, which destroys a large proportion of the antioxidants and flavanols (the things that keep you healthy and young). A recent study suggested that between 60% and 90% of the original antioxidants in cacao are lost through typical "Dutch processing". Dutch processing was originally developed in the early 19th century to reduce the bitterness, darken the colour, and create a more mellow chocolate flavour, but unfortunately it also strips out much of the goodness.
Non-organic cocoa (and non-organic chocolate) has also been heavily treated with toxic pesticides and fumigation chemicals and may contain genetically modified (GMO) ingredients.
If that wasn't enough, Oxfam estimates that over 70% of the world's cocoa is grown by indigenous communities who are paid such low wages that poverty is widespread. In some cases, child slaves are used, forced to take part in dangerous work such as wielding machetes and applying toxic pesticides. A huge incentive to grab a bar of fairly traded chocolate when your next craving strikes!
Raw, organic, fairly traded cacao, on the other hand, has abundant benefits, so you can add it to your diet without any guilt – just good old chocolatey deliciousness.
5 BENEFITS OF RAW ORGANIC CACAO.
1. 40 Times the Antioxidants of Blueberries.
Raw Organic Cacao has over 40 times the antioxidants of blueberries. ORAC scores measure the ability of antioxidants to absorb free radicals (which come from pollution and toxins in our environment); free radicals cause cell and tissue damage and can lead to diseases such as cancer.
2. Highest Plant-Based Source of Iron.
Cacao is the highest plant-based source of iron known to man, at a massive 7.3 mg per 100g. Keep in mind the iron in cacao is non-heme (as is all plant-based iron), so to get the maximum benefit you'll want to combine it with some vitamin C. Think oranges, kiwifruit, or superfoods like gubinge or camu camu (which have 40x more vitamin C than oranges), or try my Choc Orange Smoothie recipe for a Jaffa-tasting throwback.
3. Full of Magnesium for a Healthy Heart & Brain.
Raw Organic Cacao is also one of the highest plant-based sources of magnesium, the most deficient mineral in the Western diet. Magnesium is important for a healthy heart and helps turn glucose into energy, allowing your brain to work with laser-sharp clarity and focus. Perhaps that's why you reach for a bar of chocolate during an all-nighter at your desk!
4. More Calcium Than Cow’s Milk.
Would you believe that Raw Organic Cacao has more calcium than cow's milk, at 160mg per 100g versus just 125mg per 100ml of milk? Time to swap the trim latte for a couple of squares of dairy-free raw chocolate.
5. A Natural Mood Elevator and Anti-Depressant.
Cacao is an excellent source of four clinically proven bliss chemicals – serotonin, dopamine, phenylethylamine and anandamide. These neurotransmitters are associated with cozy feelings of well-being and happiness, and can even alleviate anxiety. A natural, healthy, delicious (and legal) way to get your happy buzz on.
4 WAYS TO USE RAW ORGANIC CACAO.
1. Brew Up a Hot (or Cold) Chocolate Milk.
Add 1 Tbsp of raw cacao powder to a mug, pour in 1 cup of warmed plant-based milk, and add 1-2 tsp of a natural, organic, unprocessed sweetener such as yacon syrup, agave syrup, coconut nectar, coconut sugar, or maple syrup. Or for a very simple version, try my Warming Hot Cacao Chocolate recipe.
For a cold choccie milk, first add 1 Tbsp of warm water to the raw cacao powder and sweetener to dissolve them, then add 1 cup of cold milk and a few ice cubes (or try this Chocolate Milk recipe).
Keep in mind: some studies have shown that dairy products block the absorption of the antioxidants and calcium in cacao, so save the cow's milk for the calves.
2. Whizz Into a Smoothie.
Add 1-2 Tbsp of raw cacao powder or nibs to your regular smoothie. Or try our Rich Chocolate Smoothie or Choc Orange Jaffa Smoothie for an added vitamin C boost. Sprinkle raw cacao nibs on top for a crunch factor and to make it look all pretty when you're done.
3. Rip Open a Bar.
No cooking here – just grab a bar of raw organic chocolate, break it into squares, and serve with some organic nuts, dried fruit, herbal tea, and a lot of love.
4. “Bake” a Raw Brownie.
Try your hand at our Raw Brownie, which includes both cacao powder and cacao nibs, and is sure to quash any chocolate craving in a second (100% guilt-free). It's also gluten-free, wheat-free, sugar-free, dairy-free, vegetarian, vegan and paleo, so everyone's invited to the party.
THIS WEEK'S CHALLENGE:
Add some raw, organic, fair-trade, dairy-free cacao into your life this week. Lads, even you can do this one (see ideas #1, #2 and #3).
Get your healthy chocolate on now.
from Raw Organic Powder via Cacao Vida Cited From Honey Guard
0 notes
Text
Where Millennials Live Alone—and Where They’re Still Crashing With Mom and Dad
The kids are not all right—or so the click-bait headlines would lead you to believe. There are countless stories about those flighty millennials who job-hop every year, are crazy-obsessed with their iPhone phablets and shell out too much on Instagrammable avocado toast or kale smoothies to move out of their parents’ basements and (gasp!) pay their own rent.
There’s more than a hint of truth to that last part. About 15% of 25- to 35-year-olds were still crashing with their folks in 2016, according to a Pew Research study. And that leaves 85% either cramming into apartments with friends or living solo.
The young-at-heart data team at realtor.com® decided to dig into these numbers. As it turns out, where millennials are living plays a big role in whether they are most likely to live alone, as opposed to with their folks. And there’s a lot of variation across the country, we learned.
“We definitely see a larger percentage of millennials living at home at an older age than previous generations,” says Jason Dorsey, president of the Center for Generational Kinetics, a millennial research firm based in Austin, TX. “They hit the Great Recession, so it’s taking them longer to financially recover. They had a tough job market from the start. And there’s been quite a lot of wage stagnation.”
Adding to the generational woes: Millennials have record amounts of student debt that needs to be paid off. It’s yet another factor that has helped push up the median age of first-time homeowners to 32 in 2016, according to the National Association of Realtors®.
“It’s more socially acceptable now to delay marriage, kids, and a home,” Dorsey says. “There’s not the expectation that you would have bought your own home by age 30.”
So where exactly are millennials living on their own (without roommates or romantic partners)? And where have they flown back to—or never left—the nest? To figure it out, realtor.com’s data team analyzed 2015 U.S. Census Bureau data on 18- to 34-year-olds in the largest metros. We also added in rental prices for one-bedroom apartments from the rental website Apartment List and realtor.com median home list prices to give you an idea of the local housing markets.
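For readers curious what a tabulation like this involves, here is a rough, hypothetical sketch in pandas. The file name and column names (age, metro, lives_alone, lives_with_parents) are invented for illustration; they are not the Census Bureau’s actual schema or realtor.com’s code, and real ACS microdata would first need its relationship-to-householder codes mapped into flags like these.

# Hypothetical sketch only: assumes a person-level file already prepared with
# 0/1 flags for living arrangement; real Census/ACS microdata is structured differently.
import pandas as pd

people = pd.read_csv("acs_persons.csv")  # one row per surveyed person (assumed layout)
millennials = people[(people["age"] >= 18) & (people["age"] <= 34)]

shares = (
    millennials
    .groupby("metro")[["lives_alone", "lives_with_parents"]]
    .mean()   # mean of 0/1 flags = share of millennials in each arrangement
    .mul(100)
    .round(1)
)

print(shares.sort_values("lives_alone", ascending=False).head(10))         # most likely to live solo
print(shares.sort_values("lives_with_parents", ascending=False).head(10))  # most likely to live with parents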
Ready? Let’s start with where millennials are most likely to live solo.
Where millennials live alone
1. Austin, TX
Percentage of millennials living alone: 11.2% Median rent for a one-bedroom apartment: $1,130 Median home list price: $391,900
It’s no surprise that millennials are moving en masse to funky Austin, one of the most dynamic metros in the United States. And even if they are living on their own, don’t expect them to be lonely. The city is praised for its great food (hello, breakfast taco!), the arts and tech festival/empire of South by Southwest, as well as its thriving entrepreneurial communities.
What is shocking is how many of Austin’s millennials are making it on just one income. The median rent for an apartment is more than $1,000 a month—and it gets higher the closer you get to the city center.
One reason that many millennials can afford Silicon Hills, as the Texas capital is known, is due to the influx of tech startups and other related firms. The alluring combo of higher incomes and a lower cost of living has led many to choose the city over other tech hubs.
“More and more young people have higher-salary jobs based out of the [San Francisco] Bay Area, Chicago, or New York, and telecommute from Austin, because of the quality of life,” says local real estate broker Mark Strub of Strub Residential. “It really is that cool.”
2. Omaha, NE
Percentage of millennials living alone: 10.4% Median rent for a one-bedroom apartment: $759 Median home list price: $269,700
Yes, Nebraska—you got a problem with that? The city offers affordable rents for those just starting out, and even buying a home is within reach. Plus it boasts thriving arts, restaurant, and indie music scenes. It’s home to Warren Buffett and his Berkshire Hathaway company, so we’re not exactly talking about the middle of nowhere.
The city even has a decent coolness quotient. Local indie rock bands like Cursive have produced albums with Saddle Creek Records, a homegrown record label founded in part by Omaha-native Conor Oberst of Bright Eyes. The city’s craft beer industry is also gaining momentum, hailed by Thrillist as one of the “10 Untapped Beer Cities Poised to Blow Up.”
3. Milwaukee, WI
Percentage of millennials living alone: 10.4% Median rent for a one-bedroom apartment: $719 Median home list price: $234,700
Plenty of the roughly 27,000 students who attend the University of Wisconsin-Milwaukee stick around after graduation. And why not? Milwaukee offers many of the same amenities as nearby Chicago (about an hour and a half away), and the housing is just a fraction of the price.
For example, a brand-new, 1,200 square-foot apartment in downtown Milwaukee runs $1,500 to $1,700 a month in rent, says local Realtor® Betsy Wilson Head of Realty Executives. Those a bit more flush with cash can purchase a home in good shape for $150,000 to $175,000.
Plus, millennials don’t need to break the bank to have a good time. Idyllic Lake Michigan supports a large sailing community. The Milwaukee Arts Museum is one of the largest in the country, with nearly 25,000 pieces of art. And there are tons of free activities going on around town.
“Every night in summer, [there’s a] free concert somewhere,” says Wilson Head.
4. Pittsburgh, PA
Percentage of millennials living alone: 10.2% Median rent for a one-bedroom apartment: $743 Median home list price: $175,000
Once the prime exemplar of the decline in cities in the Rust Belt, Pittsburgh has made a big comeback in recent years—especially among savvy twentysomethings. They’ve helped propel Steel Town into a new era of prosperity, driven by the growing tech industry and management services.
The city has new art spaces, parks, bike trails, restaurants, bars, and social events, while maintaining the best parts of its old, industrial vibe. Plenty of historic factories have been renovated into reasonably priced housing with the authentic urban, loft vibe that many millennials adore. House party!
5. Albany, NY
Percentage of millennials living alone: 10.1% Median rent for a one-bedroom apartment: $870 Median home list price: $269,900
New York’s state capital has embraced the tech industry, attracting companies like IBM and GlobalFoundries. This has helped retain local university graduates and lure millennials from other metros. That influx of young folk has laid the groundwork for a burgeoning cultural scene that has repurposed formerly abandoned industrial districts and launched a downtown renaissance.
“The reason why Albany is so attractive is because it’s affordable,” says local real estate broker Anthony Gucciardo of the Gucciardo Real Estate Group. Three-bedroom, two-bathroom houses rent for about $2,000 a month.
“The only people here who are likely to have roommates are those still in college,” he says.
Rounding out the top 10 cities where millennials are most likely to live alone are Indianapolis; Dayton, OH; Cleveland; New Orleans; and Kansas City, MO.
Now, ready for home-cooked meals? Let’s look at where millennials are most likely to shack up with Mom and Dad.
Where millennials live with parents
1. McAllen, TX
Percentage of millennials living with parents: 51.8% Median rent for a one-bedroom apartment: $620 Median home list price: $189,300
There are two main reasons why millennials stick around their family abodes in McAllen, which sits on the U.S.-Mexico border: They don’t make enough money to move out, and even if they could, their families may not want them to. Unlike larger cities in the Lone Star State, the area lacks good-paying, professional jobs. That makes it hard to afford to live on one’s own.
Plus, many of the city’s close-knit families prefer to pool limited resources by living together under one roof, until major life events like marriage or childbirth.
2. Oxnard, CA
Percentage of millennials living with parents: 45.8% Median rent for a one-bedroom apartment: $1,210 Median home list price: $699,000
The sky-high prices in California make it hard for just about everyone, regardless of their age, to make it on just one income. What puts Oxnard on this list is that twentysomethings are simply fleeing because it doesn’t have enough high-paying jobs to keep up with the increasing home prices.
However, this bucolic surf town is within reach of western Los Angeles—about an hour and a half away by car or train. That means millennials might be able to commute to the City of Angels a few days a week and then come home to dear old Mom and Dad.
3. El Paso, TX
Percentage of millennials living with parents: 45.6% Median rent for a one-bedroom apartment: $681 Median home list price: $166,700
El Paso has a lot in common with McAllen. It also lies along the U.S.-Mexico border, suffers from a high unemployment rate and slow economic growth, and is seeing home costs rise. And it has a large Mexican-American population that is generally favorable toward children living with their parents well into adulthood.
So even though housing is pretty cheap, many local residents still can’t afford their own digs. But many wouldn’t want ’em anyway.
4. Bridgeport, CT
Percentage of millennials living with parents: 45.2% Median rent for a one-bedroom apartment: $1,134* Median home list price: $725,000
Many millennials want to live on their own in coastal Bridgeport, but there aren’t enough homes to go around—especially in the right price range.
Because of its close proximity to New York City—about 70 minutes away on an express Metro North train—Bridgeport is a popular commuter hub for those looking to save a few bucks and former city dwellers craving extra space for a family. The downtown is also experiencing a resurgence, especially in the Black Rock neighborhood, with bars, restaurants, and live music venues popping up.
That’s led to “a big inventory of renters and a small inventory of rentals,” says local Realtor Gail Robinson of William Raveis Real Estate.
5. Miami, FL
Percentage of millennials living with parents: 44.8% Median rent for a one-bedroom apartment: $1,062 Median home list price: $379,500
Millennials are spreading across Miami like bronzing oil on sunbathers. The beaches, all-night parties, and jobs have made it the second most desirable U.S. metro for millennial home buyers, according to realtor.com.
Even with the influx of young residents, the Magic City has one of the highest percentages of millennials who have yet to fly the coop. Thank the killer rents and rising home values. After college, many South Florida kids head home to continue with graduate studies at nearby universities or enter the job market. But they’re often met with entry-level salaries that cannot keep up with the elevated cost of living.
“I have a lot of clients who live with parents and save up little by little until they’re ready to buy something,” says local Realtor Giovanna Calimano, of Yes Real Estate.
The rest of the top 10 metros where millennials are most likely to live with their parents are Riverside, CA; New York City; North Port, FL; New Haven, CT; and Worcester, MA.
* The average rental price for a one-bedroom apartment in June, according to Rent Jungle.
The post Where Millennials Live Alone—and Where They’re Still Crashing With Mom and Dad appeared first on Real Estate News & Insights | realtor.com®.
from DIYS http://ift.tt/2uXIaWT
0 notes