There are many ways to measure how many people in a country are poor. A common approach is the use of a so-called poverty line. First you decide what you mean by poverty – for instance an income that’s insufficient to buy life’s necessities, or an income that’s less than half the average income. Then you calculate your poverty line – for instance the amount of income someone needs in order to buy necessities, or the income that’s half the average income, or the income of the person with the tenth lowest income in a population of one hundred (i.e. the 10th percentile). And then you simply count the people who fall below this line.
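To make the mechanics concrete, here’s a minimal Python sketch of those three example lines, using a made-up list of incomes (all figures are illustrative, not real data):

```python
# Minimal sketch of the three example poverty lines above.
# All incomes are made-up figures for illustration only.
import statistics

incomes = [8_000, 12_000, 15_000, 22_000, 30_000, 45_000, 60_000, 90_000]

# 1. Absolute line: a fixed amount judged sufficient for basic needs.
absolute_line = 14_000

# 2. Relative line: half of the average income.
relative_line = 0.5 * statistics.mean(incomes)

# 3. Percentile line: the income of the 10th person from the bottom
#    per hundred people, i.e. the 10th percentile.
percentile_line = sorted(incomes)[max(0, round(0.10 * len(incomes)) - 1)]

for name, line in [("absolute", absolute_line),
                   ("half-average", relative_line),
                   ("10th percentile", percentile_line)]:
    poor = [y for y in incomes if y < line]
    print(f"{name} line = {line:,.0f}: {len(poor)} of {len(incomes)} poor")
```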
I intentionally chose these examples to make a point about absolute and relative poverty. (I think it’s important to distinguish between types of poverty and between different definitions of poverty.) In the U.S., people mostly use an absolute poverty line, whereas in Europe relative poverty lines are used as well. As the examples above make clear, an absolute poverty line is a threshold, usually expressed as the income sufficient for basic needs, that is fixed over time in real terms. In other words, it’s adjusted for inflation only and doesn’t move with economic growth, average income, or changes in living standards or needs.
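As a rough sketch of that updating rule (with a hypothetical base-year line and made-up price-index values), the line is carried forward by inflation alone:

```python
# Sketch of how an absolute poverty line is carried forward over time:
# it is scaled by a price index (inflation) only, never by income growth.
# The base-year line and CPI values are made up for illustration.

base_line = 14_000                                # hypothetical base-year line
cpi = {2000: 100.0, 2010: 126.6, 2020: 151.2}     # made-up price index

def absolute_line(year: int, base_year: int = 2000) -> float:
    """The absolute line in `year`: the base line adjusted for inflation only."""
    return base_line * cpi[year] / cpi[base_year]

print(absolute_line(2020))   # nominally higher, but fixed in real terms
```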
A relative poverty line, on the other hand, varies with income growth or economic growth, usually 1-to-1, since it’s commonly expressed as a fixed percentage of average or median income. (It can of course have an elasticity of less than 1, meaning the line rises by less than the full percentage growth of incomes, since you may want to avoid a disproportionate impact on the poverty line of very high and very volatile incomes. I’ve never heard of an elasticity of more than 1.)
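To spell out what such an elasticity means in practice, here’s a small illustrative sketch (the numbers are invented): the line rises by the elasticity times the growth of the benchmark income.

```python
# Sketch of updating a relative poverty line with an income elasticity e:
# e = 1 reproduces the fixed-percentage rule (the line moves 1-to-1 with
# median income); e < 1 dampens the line's response to income growth.
# All numbers are illustrative.

def update_line(old_line: float, median_growth: float,
                elasticity: float = 1.0) -> float:
    """New line after median income grows by `median_growth` (0.04 = 4%)."""
    return old_line * (1 + elasticity * median_growth)

line = 15_000                             # hypothetical current line
print(update_line(line, 0.04, 1.0))       # 15600.0: full pass-through
print(update_line(line, 0.04, 0.5))       # 15300.0: dampened response
```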
Both absolute and relative poverty lines can be criticized. Does an absolute poverty line make sense when we know that expectations change, that basic needs change (in contemporary Western societies, not having a car, a phone or a bank account can lead to poverty), and that the things you need in order to participate fully in society are very different now from what they once were? We know that people’s well-being depends not only on the avoidance of absolute deprivation but also on comparisons with others. The average standard of living shapes people’s expectations, and when they are unable to reach that average, they feel excluded, powerless and resentful. Can people who fail to realize their own expectations, who lose their self-esteem, and who feel excluded and marginalized be called “poor”? Probably yes. They are, in a sense, deprived. It all depends on which definition of poverty we can agree on.
It seems that people do think about poverty in this relative sense. If you compare the (rarely used) relative poverty line of 50% of median income in the U.S. with the so-called subjective poverty lines that result from regular Gallup polls asking Americans “how much they would need to get along”, you’ll see that the lines correspond quite well:
(Chart: the 50%-of-median relative poverty line plotted against Gallup subjective poverty lines in the U.S.; source)
So if relative poverty corresponds to common sense, it seems to be a good measure. On the other hand, a relative poverty line means moving the goalposts for all eternity. We’ll never vanquish relative poverty, since the line itself moves up as incomes rise. Relative poverty can even increase as absolute poverty decreases, namely when there’s strong economic growth (i.e. strong average income growth) combined with widening income inequality, something we’ve seen for example in the U.S. during the last decades. (Technically, if you use the median earner as the benchmark, relative poverty can disappear if all earners below the median move toward it and end up earning just $1 or so less than the median. But in practice I don’t see that happening.)
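A toy example makes that mechanism concrete (all numbers are invented): every income rises in real terms, so absolute poverty falls, yet the distribution widens enough that relative poverty, measured against 50% of the median, goes up.

```python
# Toy illustration: real incomes rise for everyone (absolute poverty falls),
# but inequality widens enough that relative poverty (below 50% of the
# median) increases. All figures are made up.
import statistics

def headcounts(incomes, absolute_line):
    relative_line = 0.5 * statistics.median(incomes)
    abs_poor = sum(1 for y in incomes if y < absolute_line)
    rel_poor = sum(1 for y in incomes if y < relative_line)
    return abs_poor, rel_poor

before = [8_000, 9_000, 18_000, 20_000, 22_000]
after  = [11_000, 12_000, 30_000, 60_000, 100_000]  # growth, but more unequal

print(headcounts(before, 10_000))   # (2, 1): 2 absolutely poor, 1 relatively
print(headcounts(after, 10_000))    # (0, 2): absolute down, relative up
```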
Some data on relative poverty in developed countries:
(Chart: relative poverty rates in developed countries; source. The relatively dismal number for the U.S. is partly due to very high incomes at the top.)
More posts about poverty measurement are here.