
A government against 2/3 of the public. Is it reasonable that this is just propaganda?

Question

The poll published this evening showed that about 2/3 of the public oppose the revolution the government planned to carry out.
I didn’t see the details of the poll, but that’s how it was reported on the news; the Hadad poll sounds reliable.
But still?
Just about 5 months ago, almost 49% of the public voted for this government.
That’s admittedly not a majority of the public, but it is far more than the roughly one-third support this poll effectively gives it.
What happened to make its standing drop?
Option A: It is true that almost 49% wanted this government in general, but far fewer support the revolution specifically.
Option B: It is true that it had the support of almost half the public, but that support has been steadily eroding.
Option C: The government did not explain itself, the opponents did, part of the public was persuaded, and as a result it dropped from just under 49% support (already a minority) to a minority of around a third.
Option D: All of the answers are correct.
 
What does the Rabbi think is the correct answer?
Or are there other possibilities?

Answer

I don’t see what importance this discussion has, why it belongs on the site, or, above all, what added value I have regarding it. I assume it is mainly a matter of the public coming to understand what the reform includes and what its implications are, which were not clear until it began to be implemented. Of course other possibilities could also be correct, but that seems to me to be the main point.

Discussion on Answer

Ish (2023-03-29)

I assume that the possibility that the poll is wrong also exists (I, for example, voted in the last election, but I haven’t answered any poll recently).

B (2023-03-29)

After the newspaper Haaretz fired Gadi Taub because ‘this is wartime,’ there is no small chance that the polls are engineered. The editors of these polls usually belong to left-wing circles (a fact illustrated by how much Likud voters enjoyed toying with the TV exit polls, to the point that Mina Tzemach wanted to resign over the lies), since polling is a subfield of the media world, where the ‘correct opinions’ prevail. In any case, Nadav Shnerb has written posts about how these polls are conducted:

Post from 30.10.22: "A remark tossed out by Amit Segal sent me to check what the hell election pollsters in this country are actually doing. My conclusion: the pollsters are not reporting to you the results of the polls they conduct, but their guess about the election results after they have seen the polls.
Here’s the argument. In polls they usually sample 500 people at random from the country’s population. Now suppose that the United Torah Judaism party has support corresponding to seven seats. How many people in the random sample will answer that they intend to vote gimel [UTJ’s ballot letter]?
To get seven seats you need to win 5.83 percent of the vote. If you check the numbers, you’ll see that the pollster has to declare that a party got 7 seats if, out of the 500 people sampled, between 27 and 31 said they would vote for Goldknopf’s party [1].
And here’s the question: suppose there is a population in which exactly 5.83% of the people are Agudah voters. If I ask 500 random people, what is the probability that the number answering ‘Agudat Yisrael’ will be between 27 and 31? I checked, and the answer is 0.367, not far from a third. From this it follows that if the pollsters report the poll results, and not their fantasies, then in two-thirds of the polls Agudat Yisrael should have gotten a different number — five or six or nine, not exactly seven. Seven should appear more than any other number, but certainly not exclusively.
And from here to the actual results: since the beginning of September, Channel 12 News conducted nine election polls, and in all of them (!!) the party of Gafni and Goldknopf got exactly seven seats. The pollsters of Channel 13 showed the same reliability (nine out of nine), Kan 11 (7/7), Channel 14 (7/7), Israel Hayom (3/3), and Army Radio (2/2). The only ones with any fluctuations were the people at Maariv: out of 8 polls, two gave Agudah six seats; the rest (6/8) stuck to the legendary number seven.
So what is the probability that the pollsters are reporting the results they actually got in the poll? The probability that nine polls would give us exactly 7 seats, assuming again that the percentage of Agudah supporters is 5.83, is about one in ten thousand. The probability that more than 40 polls (if I sample all the channels) would give only two results different from seven is, under the same conditions, something too tiny to talk about, a millionth of a millionth of a millionth or a bit less.
I did a similar check for Ra’am, where the situation is even worse. Out of 42 polls, only one, by Channel 12, gave a number different from four seats. Again — we are supposedly expected to believe that in 41 cases they asked 500 random people and in each of those cases between 17 and 20 people said they would vote Ra’am. The probability — zero.
The equality between the blocs also reflects a similar statistical problem. In cases of a tie, or 61:59 in favor of one bloc, the larger bloc should get between 250 and 256 votes (otherwise it would have 62). Assuming the public is split exactly fifty-fifty between the blocs, we would expect that in a sample of 500 random people there would be a little over a fifty percent chance that one bloc would win by a gap of more than two seats. I didn’t check, but it seems to me that for some reason all the recent polls are in the range between 59 and 61. Again — an absurdly improbable result.
In my opinion, one can assume fairly confidently that this whole polling business is fabricated. The pollsters start from some basic assumptions about population sizes that they ‘color’ as voters of specific parties, and choose the respondents in the poll not randomly but in a way that will produce the ‘correct’ result. It is very likely that they also look right and left, examine the forecasts of fellow pollsters, and ‘iron out’ their own results to cover themselves: image-wise, it is much cheaper to be wrong together with everyone than to be wrong alone.
What does all this mean? To the best of my understanding, the conclusion is that it may be the case (not certain, of course) that the pollsters are missing by a mile, that there is no tie at all between the blocs, and that parties will get far fewer or far more than the polls predict for them. As always, the public (that is, me and you) tends to remember the bad and not the good, the pollsters’ big mistakes and not their successes. Therefore we give the pollsters a clear incentive to predict a tie (that way they will never be too wrong) and to align their forecasts with those of their colleagues in the profession (that way they’ll be wrong ‘with the flow’). The result — polls worth much less than we imagine."
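The binomial arithmetic in this post is easy to check. Below is a minimal sketch in Python (assuming scipy is available) that reproduces the quoted figures under the post’s own assumptions: simple random samples of 500 respondents, the stated vote shares (7/120 of the vote for Agudah, 4/120 for Ra’am, a 50:50 bloc split), and the respondent-count cutoffs (27-31 and 17-20) taken directly from the post.

```python
# Minimal check of the binomial figures quoted above, assuming simple
# random samples of n = 500 and the vote shares stated in the post.
from scipy.stats import binom

n = 500

# Agudah (UTJ): 7 seats = 7/120 of the vote (5.83%); per the post, a poll
# reports 7 seats when 27-31 of the 500 respondents pick the party.
p_aguda = 7 / 120
p_seven = binom.cdf(31, n, p_aguda) - binom.cdf(26, n, p_aguda)
print(f"P(one poll shows exactly 7 seats) = {p_seven:.3f}")        # ~0.37

# Nine independent polls all landing on exactly 7 seats:
print(f"P(9 polls all show 7 seats)       = {p_seven ** 9:.1e}")   # ~1e-4

# Ra'am: 4 seats = 4/120 of the vote, reported when 17-20 respondents
# pick the party; the post counts 41 of 42 polls showing exactly 4 seats.
p_raam = 4 / 120
p_four = binom.cdf(20, n, p_raam) - binom.cdf(16, n, p_raam)
print(f"P(one poll shows exactly 4 seats) = {p_four:.3f}")
print(f"P(41 of 42 polls show 4 seats)    = {binom.pmf(41, 42, p_four):.1e}")

# Blocs: with the public split exactly 50:50, how often does one bloc's
# count land in 244-256 of 500, i.e. a reported result of 59-61 seats?
p_narrow = binom.cdf(256, n, 0.5) - binom.cdf(243, n, 0.5)
print(f"P(poll lands in the 59-61 range)  = {p_narrow:.2f}")       # < 0.5
```

The exact numbers shift a little depending on the seat-allocation rule (the sketch uses pure proportional rounding, as the post appears to), but the orders of magnitude match the quoted ones: a single honest poll lands on exactly seven seats only about a third of the time, so dozens of consecutive polls doing so is essentially impossible by chance.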

Post from 20.2.23: "Here is an interesting phenomenon I came across through a discussion, on a somewhat different topic, on Andrew Gelman’s blog.
There is something people call ‘public trust in the media,’ and there are research institutes that try to measure this trust, both within a single country and comparatively across many countries around the world. One of these institutes is associated with Reuters; the second is called the Edelman Trust Barometer. What these institutes do is public-opinion polling in which the public is asked about its attitudes toward the media.
It turns out, much to everyone’s surprise, that there is no connection whatsoever between the results reached by the two institutes. There is no significant correlation either at the level of the result (what percentage Institute A gives to public trust in Chile, say, versus Institute B) or at the level of trends (by how much public trust in Chile rose or fell over a given period of time).
What is going on here? I assume that two such respectable institutes use standard polling methods. Moreover, it is hard to imagine that they have some political interest — why should some American researcher care how popular the media is in Indonesia or Turkey? Well then — how did it happen that there are such differences, that in effect there is no connection between the results?
The person who noticed the point suggested that the difference stems from the wording of the question: one institute asked whether ‘the media does the right thing,’ while the other asked people whether they ‘believe most of the news most of the time.’ Interesting. Be that as it may, you can see here how unreliable and inconsistent public-opinion polls of this kind are, even when they are conducted by professionals with no vested interest.
Compare this to election results: here in Israel we had five election campaigns over a period of four years, and the results barely changed. There was more or less zero movement of votes between blocs, and aside from the story with Bennett, who decided (according to him — with open eyes) to publicly spit on his base, even the parties got more or less the same numbers of votes. The results changed from one election to the next only because of politicians’ ‘betrayals’ of their bloc, or because of tiny fluctuations intensified by the high electoral threshold. After all the propaganda, the provocations, Guardian of the Walls, the coronavirus — nothing moved. People vote for things that seem important to them, and events, or the twists of the political argument, did not seem important enough to them.
Conclusion: when people tell you about ‘the loss of public trust’ (in the Supreme Court, in the Knesset, in the coach of Israel’s national team) or about ‘the positions of Likud voters/the left’ — take the story with a gigantic grain of salt. It is reasonable to assume that the way the question is phrased in such polls affects the result dramatically, and when interested parties are involved — don’t even bother. Ignore it. Just ignore it."

Michi (2023-03-31)

Of course. Anyone who says something that doesn’t fit the Bibists is a leftist. And that’s how the Bibist theory becomes unfalsifiable. Good luck with that.
As for the stupid claims you raised here, the fact is that most polls are amazingly accurate in predicting election results. Somehow their abilities drop dramatically wherever the results cannot be falsified (because there are no elections to measure them against). There, any Bibist can raise ‘learned’ claims explaining why the polls are worthless because everyone is left-wing.
