We have not been monitoring the Austrian election, nor formally (and correctly) predicting it as we have done for the major elections from Brexit onwards. However, we did predict that most OECD countries are moving to a specific model, viz our Euromodel (initially built after the Dutch election in March, based on what we saw in Brexit and the US election):
In essence, it is a move away from the status quo, via a movement to the poles - right and left. How it plays out depends on the country; so far in northern Europe and the US it has mainly been a major shift to the right and a smaller one to the left. It may well be the reverse for Mediterranean countries, but in northern Europe the model predicts:
1. The main Centre-Right party moves considerably further right, adopting quite a few of the Far-Right policies and narratives. This ploy may lose some of its more centrist supporters (to whom, though?) but it prevents the Far Right from taking far more of its right-wing supporters. Where it doesn't happen (Germany), the Far Right takes a larger share.
2. The Centre-Left is decimated: its various "Deplorables" classes - traditional working-class voters - go both left and right. This leads to a rise in the Far-Left numbers. The hard-to-predict move is the Centre-Left voters who move rightwards, as there is no natural home for them at the moment, and they seem to scatter depending on local conditions.
This, then, is what we expect to see in Austria; we shall be keeping track.
Update - as of c 5pm UK time Sunday, the following early results are being reported by Austrian broadcasters:
People's Party 30.2% (Centre-Right)
Freedom Party 26.8% (Right)
Social Democrats 26.3% (Centre-Left)
Update - Monday c 5pm - Final results are
People's Party 31.4% (Centre-Right)
Freedom Party 27.4% (Right)
Social Democrats 26.6% (Centre-Left)
The right-wing Freedom Party has made gains, and the Centre-Right People's Party has moved further right to stop them making even greater ones (Stipulation 1 of the Euromodel). This is also the lowest the Social Democrats have ever polled (though barely changed from the last election); they are now the third party, and it appears they will not be part of the governing coalition for the first time ever - a major drop in their influence (Stipulation 2 of the Euromodel). They have been saved further ignominy by Austria's Greens: our model says a shift to the Far Left is the main flow, but Austria has confounded this a bit by having a Far-Left implosion.
There are the usual protests of "dirty campaigning" and "fake news", a recurring refrain across every election since Brexit. The reality, however, as even former US President Obama has alluded, is that there is a major sociopolitical trend happening across the OECD. There are also headlines of a "shock result", which by now is very odd, as we have seen one in election after election over the last year or so, and there will be more to follow unless some major structural changes are made in the EU and, more broadly, in the OECD.
Big picture: democratic OECD countries seem to be reaching c 50% of the population who believe the current system is not serving them, and these voters are voting against the status quo, driving major changes country by country. How and why they do it depends on the country, though there are a lot of common traits and issues. Mitigating this trend, however, requires these countries and their greater institutions like the EU to make fairly major changes to their status quo structures, of which we so far see little sign - so these "shock" votes will probably continue.
Above - system map of the US Election dataswarm in process. Trump is in front - this was at a time when the mainstream US media and polling were saying Clinton was far ahead, and we then knew an upset was very likely coming.
Using our systems we have managed to predict 5 elections (strictly speaking 6, as the French one had 2 rounds) correctly over the last year, 3 of them very much against all the polling predictions (Brexit, US, UK 2017) and one that still produced "shock" results (Germany). We went public on 5 of them, before election day in each case. (The bullet points below link to the individual election prediction summaries on our DataSwarm site.)
We find that in commercial work, measuring aspects like "sentiment" or "influence" can give rather nebulous results; measuring the true picture is not always simple.
- Predicted Trump win – we went public with our prediction 3 days before the election. The polls said Hillary was a shoo-in, but the system saw a close race and a dynamic support base that Trump had built up from the beginning, and plumped for him.
- Predicted UK election – went public 2 days before. The system predicted a hung parliament, we got one. Polls were predicting a major Tory win.
- Predicted French Election (both rounds) – went public 2 days before both. Got both right. System underestimated scale of Macron win in second round, but in our review we learned how to more accurately predict future voter flow from the system's memetic linkages - major learning point bonus!
- Predicted German Election – went public 2 days before. Predicted AfD surge and CDU crash better than any polls.
- Brexit was an initial side project, done with a bit of social media sampling and system-dynamics modelling, but as that worked and we got it right - despite all the polls saying completely the opposite - we decided to carry on with other elections, as we found that a "known" outcome at a point in time allowed us to calibrate the algorithms quite well.
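The Brexit feedback loop described later in this piece - exaggerated messaging breeding resistance, resistance breeding switch-off and drift to the other camp - can be sketched as a toy system-dynamics model. To be clear, everything below (parameter values, starting shares, functional forms) is an illustrative assumption, not the model we actually ran:

```python
# Toy system-dynamics sketch of a two-horse race (illustrative only).
# One feedback loop: the louder the favourite's fear messaging, the more
# resistance builds; resistance drives soft supporters toward the other camp.
# All parameters are made up for illustration.

def simulate(weeks=10, remain=52.0, leave=48.0):
    resistance = 0.0
    for _ in range(weeks):
        volume = 1.0 + resistance      # losing side turns up the volume as it slips
        resistance += 0.2 * volume     # hectoring breeds more resistance
        drift = 0.3 * resistance       # a slice of soft support switches off and drifts
        remain -= drift
        leave += drift
    return round(remain, 1), round(leave, 1)

print(simulate())  # with these illustrative parameters the lead erodes and flips
```

With these made-up parameters the favourite's lead erodes and eventually flips over the ten weekly steps - the qualitative pattern described in the Brexit write-up below, though the real model was calibrated against sampled social media data.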
How do these metrics really work, and how do they relate to what we can measure? Using known outcomes like elections would really help calibrate them, and would let us show clients that our system was working correctly. Besides, if we got a few elections right we might open up a new area of business!
Each election taught us something new about the way the algorithms should work, and what needed tweaking or expanding, and we learned some new insights about how human behaviour in reality maps to the metrics you can "see" with social media analytics. We also learned quite a bit about how media works in persuading people, and how "fake news" et al operates. We also proved to ourselves that the system was good in 2 English speaking cultures (US and UK), and could also operate in French & German language and culture - and could crunch tens of millions of units of social media data.
We also had quite a few insights about what is happening at a population level politically, and our views differ quite a lot from a lot of the "conventional wisdom" we saw during and after the elections.
These are some of the high-level takes on each election:

Brexit, 2016
We did not put the main system on this, but used a series of small data samples over the campaign period, then built a fairly simple system-dynamics model to predict the outcome of a two-horse race. It worked, against all the conventional wisdom and poll predictions to boot. The Remain camp (the favourites) seemed to go in with a static strategy and refused to shift it when it was clearly losing. In essence, Remain's arguments, amplified by "Project Fear"-style messaging, were increasingly perceived by neutrals as exaggerating the risks; that bred resistance to their message and gave the pro-Leave media a foothold to land some telling blows (with similarly exaggerated, but more positive, claims). Cycle this through a few times, increase the hectoring volume as Remain started to panic, and more and more people simply switched off to these messages. The increasingly vicious insults aimed at Leave voters (more on insults later) alienated many more. The sampling showed rising support for Brexit, and it became clear to us that Remain were mortally wounded and would probably lose (there is always a range of error in these things).

The US 2016 Election
We tracked the primaries to "train" the main system, and it correctly predicted the Trump and Clinton wins, though Sanders was a major challenger to Clinton for a very long time. We don't count the primaries among our predictions, as we never published predictions for them per se (maybe we should in future).
Tracking the actual presidential race from August - September 2016, we saw Trump winning from the beginning; we believed the polls were wrong and suspected another upset was coming. Clinton had caught up a lot by the end, but the system said Trump edged it. We thought it would be very close - certainly not the Clinton barnstorm the polls were all predicting. We had seen this in Brexit as well, so we decided to trust the system. We gulped and went to press, calling it for Trump, and he won (only just, as the system predicted). Why did we see what was right while the polls got it so wrong? We think there were a number of causes: we were getting a more realistic picture via social media, but frankly another reason seemed to be the pollsters themselves - they seemed unable to believe what their data said.

The UK 2017 Election
The Tories were supposed to have gone into the UK election with a major lead. We saw no huge Tory lead, ever - they were just a bit ahead. In week one the LibDems made most of the running but quickly fell away as the main parties got going. After another week or so Labour started to close the gap on the Tories, and Labour's manifesto was a step change - from about then on Labour gained much faster. As with Brexit, the Tories seemed to go in with a static strategy and didn't shift it when it was clearly losing ground. The only topics to "break the surface" as very influential were Brexit and the NHS, with Scotland a distant third; all others were in the noise - it wasn't the economy, stupid. From about two weeks before election day our system was predicting a hung parliament. Now comes the embarrassing bit: we data-analytics humans messed it up. We "knew" from the 2010 and 2015 UK elections that social media still leaned a bit left, so we over-compensated and called a range from hung parliament to a stay-the-same outcome, with a reduced majority as the most likely. It was a hung parliament; the system was bang on, and we decided from then on to "trust the system". (We were still far closer than any of the polls, by the way - they were all predicting an enlarged Tory majority.)

The French 2017 Election - round One
This was more complex than the previous three elections we had tracked, in that there were multiple candidates with support levels far closer to each other than in the US primaries, where clear winners emerged. Our system indicated Macron and Le Pen were the front runners, with Macron ahead of Le Pen, and it was proved right - another correct call. But we had an interesting lesson here. Quite close to the end the system was saying another candidate, Hamon, looked like a strong competitor, even potentially leading Le Pen. The French polls, which are usually very accurate, said he had started well but was fading fast. When we looked at our latest daily data he was indeed nowhere near the front: his support had fallen over the first round, but our algorithms were hanging on to too much history, over-weighting his early high ratings. That was a very useful lesson in calibrating the algorithms' trend tracking!

The French 2017 Election - round Two
The second round was a two-horse race, as with Brexit, the US and the UK elections - easier going for us. Our system indicated Macron was in front and Le Pen second; we predicted he would win, and we got it right. But the system had Le Pen closer behind than the eventual result showed - the only time across all the elections we tracked that the polls were more accurate than we were. So we backtracked through the system's calculations and found something very interesting: the relationships of the eliminated candidates to Macron or Le Pen predicted the relative shift of each candidate's supporters to the two finalists. Summing those relative shifts gave a far closer split to the actual result. We had just stumbled over how to analyse future voter intent. We did not have this problem with Clinton/Sanders, probably because there was more time between primary and final election, so voter shifts had largely happened by the time voting started.

The German 2017 Election
Tuning the system with the lessons learned in the French rounds helped hugely, as this was again a multi-horse race, which is more complex. We called the German election pretty well given that challenge, predicting both the "shock" AfD jump and the CDU/CSU drop correctly, against the polls' predictions; our overall error on the day was about the same as the polls'. Why any error at all? Most of it was due to another "fast faller" - the SPD. Despite our tweaks for Monsieur Hamon in France, the system still over-weighted early history, so we over-estimated SPD support; more tweaking will be required. Another point: we found underlying AfD support is larger than the election result would suggest, but its influence on the voting outcome on the day was lower than the other parties'. We think true AfD support is several percentage points higher, which will probably come to light once they get into parliament and get more exposure - assuming, that is, they don't fall apart instead: their largest personality (by far), Frauke Petry, left the day they were sworn into parliament!

The System
The system's logic has, in our view, proved dependable over a range of conditions, languages, etc.: the algorithms are seeing and indicating a true picture and are fairly accurate in their predictions. We have learned a lot about calibrating some of the vaguer terms such as sentiment and influence. We had also built our own hardware stack (we found commercial cloud systems too slow and too expensive for this sort of data crunching), so we were quite pleased it held up to the task. We also note that we got these results not by spending long hours tramping the ground, or the focus-group grind, but by our computers crunching tens of millions of social media data items. The only walking we had to do was to the coffee machine.

Political Trends
We now have a huge amount of interesting data with which to look at political trends and insights, and to derive lessons for future elections, and we are going through it all with great interest - but some general points were immediately apparent:
- In all 4 countries, over 6 elections, there are a large number of voters - traditional working class and many traditional "non voters" - who turned out for the candidates who promised a shift away from the status quo. Politicians ignore this trend at their peril.
- There seems to be a decline in the centre of politics, and a shift to the poles of each political spectrum. Various countries' electoral systems and parties handle this in various ways, but the trend is marked.
- That "insult" thing we mentioned above: casting ad hominem insults at the other side's voters seems to be a bad idea. In every election we saw, in various ways, that insulting the other side's voters just created more of them. A lesson for future candidates: don't call the other side's voters "Deplorables" (or worse).
- Humans are humans; culture, language and electoral system are lower-order variables - 6 elections, 3 languages, 4 cultures, the same approach worked.
- In a similar vein, the system worked across various alleged levels of "fake news", Russian involvement, electorate manipulation, voting-machine rigging, bot spoiling etc., and delivered a good result each time. The system handles most bots quite well (in our humble opinion), but given that our algorithms were basically the same and produced the same approximate quality of predictive results across apparently highly varying levels of all these spoilers, we are starting to think that "fake news", "the Russians" et al were not the decisive factors in any of these elections that some believe they were.
- In addition, there seems to be some form of major systemic problem with US and UK polling; the French and German polls mapped to the final results far more closely.
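On the bot point: a common heuristic for the kind of screening alluded to is to flag accounts with implausible posting rates or highly repetitive content. The sketch below uses made-up thresholds and is not the system's actual method:

```python
# Illustrative bot-screening heuristic (not DataSwarm's actual method).
# Flags accounts whose posting rate or repetitiveness is implausibly high;
# the thresholds are assumptions for illustration only.

def looks_like_bot(posts_per_day, distinct_text_ratio,
                   max_rate=50, min_variety=0.2):
    """distinct_text_ratio = unique posts / total posts for the account."""
    return posts_per_day > max_rate or distinct_text_ratio < min_variety

accounts = [
    {"id": "a", "posts_per_day": 12,  "distinct_text_ratio": 0.9},   # normal user
    {"id": "b", "posts_per_day": 400, "distinct_text_ratio": 0.8},   # spammer rate
    {"id": "c", "posts_per_day": 30,  "distinct_text_ratio": 0.05},  # copy-paste bot
]
humans = [a["id"] for a in accounts
          if not looks_like_bot(a["posts_per_day"], a["distinct_text_ratio"])]
print(humans)  # prints ['a']
```

In practice a filter like this only needs to remove the bulk of automated volume; residual bot noise that is uncorrelated with real support tends to wash out of trend estimates.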
Lots more to come on this over time... what we can say is that we disagree with quite a bit of the "popular" analysis by the political press and pundits about why various things happened, and what it all means going forward.
They say predictions are dangerous, especially about the future. For better or for worse we tracked Brexit, the US Election, and the UK 2017 election and got them right despite the polls. We also tracked the French one and got the right guy, but underestimated the support level - see over here
Well, we have been tracking the German election using social media since February and, 6.8 million tweets later, we have some predictions: on Friday we made an attempt to predict the German election using our own data analytics systems that analyse social media. The results for the 7 main parties (assuming the 7 main parties take c 95% of all votes) were:
AfD Election outcome = 15% (- but a wide range variance of c 12 - 18%)
CSU 8% (CDU + CSU = 33%)
The range is typically +/- 2% (except the AfD, as discussed below).

The usual caveats
Note that German Twitter usage is relatively low as a % of the population, so (as was true in the UK and US in the low-penetration years) it may still lean left if it is still mainly early adopters; our predicted SPD and Linke figures may therefore be a few % high. Ditto, the FDP figure could be under-represented.
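The kind of demographic correction being described can be sketched as scaling each party's raw social-media share by an assumed over- or under-representation factor and renormalising. The raw shares and skew factors below are hypothetical placeholders, not calibrated values:

```python
# Sketch of a demographic skew correction (hypothetical numbers throughout).
# skew > 1 means the platform over-represents that party's supporters.

def adjust_for_skew(raw_shares, skew):
    """Divide each raw share by its assumed skew factor, then renormalise
    so the shares sum to 100."""
    corrected = {p: s / skew.get(p, 1.0) for p, s in raw_shares.items()}
    total = sum(corrected.values())
    return {p: round(100 * s / total, 1) for p, s in corrected.items()}

raw = {"SPD": 24.0, "Linke": 11.0, "FDP": 7.0, "Others": 58.0}
skew = {"SPD": 1.15, "Linke": 1.15, "FDP": 0.9}   # assumed factors
print(adjust_for_skew(raw, skew))
# → {'SPD': 21.7, 'Linke': 9.9, 'FDP': 8.1, 'Others': 60.3}
```

The direction of each adjustment matches the caveat above: left-leaning parties are damped a little, the under-represented FDP is nudged up, and everything still sums to 100.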
But the real question mark is over the AfD. Why is our system showing a most probable result so much higher than the polls, and with a wider range? (Incidentally, it shows there is even a remote chance it could be greater than 20% - so could there be another upset?) The issue is the sheer volume of support (see blob picture above) yet a lower likelihood of votes - it sits quite far to the left of the CDU and SPD on a log scale, i.e. about half the impact.
Or are we just wrong re the AfD - the polls are saying 11%?
(Update - many adjusted to 12% on Saturday after we submitted this)
In brief, when it comes to these "alternative" party outcomes the system has seen the "right" picture in all previous elections, and we are betting it will do so again. Our ingoing assumption is that the "shy voter of an unpopular party" effect is in play, because it has been before in similar situations.
(And it ain't about bots... bot activity has been very low, and besides, our system is quite good at ignoring them.)
The rest of the discussion about what we are seeing and thoughts about why are over here on our Dataswarm site
(Update) As of the Sunday evening exit polls it's been none too shabby, though it seems the SPD has undershot to the benefit of the smaller parties, and the AfD is c 13-14% - not the 15% we predicted, but higher than the poll predictions. We shall see by tomorrow.
So the UK General Election results are now known except for 1 of the 650 seats. The Tories have 318 seats but needed 326 for a majority of 1. The eventual Tory vote share in 2017 was 43% to Labour's 40% - a relative difference of about 7% - which puts the Tories 8 seats below a majority. In 2015 it was Tories 36.9%, Labour 30%, a relative difference of about 23%.

Our algorithms had it spot on, predicting a hung parliament: the lower and median outcomes were hung parliaments, with the upper bound a slightly smaller Tory majority. So we got it, right?
Well, yes, the algorithms behaved very well, but the humans (mainly me) didn’t.
The reason was the worry about social media biasing too Liberal/Labour, as it did in 2010 and 2015. Now, we knew it wasn't as biased as in 2015, given the larger demographic now on it, but we (I) thought there would still be a bias, which we put at somewhere between 3 and 8%. Our adjusted range therefore had a small Tory win as the median, a result slightly better than last time at the high end, and a hung parliament at the low end. The predictions were very tight - we were talking a c 20-30 seat spread over 650 seats - but the actual result came in at the bottom limit of the range, while we had called the median: a small Tory win.
So - the algorithms got it (there was nearly no social media bias in 2017), and the humans over-compensated for the bias.
(To be fair, this was still a range from hung parliament to "slightly better than last time", with a tiny solution range, and it was a damn sight better than nearly all the polls managed, and way better than the pundits.)
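The over-correction described above is easy to illustrate numerically. The 3-8% adjustment range comes from the account above; the raw lead figure and the majority threshold below are hypothetical:

```python
# Illustration of the human bias over-correction (hypothetical numbers).
# The raw signal said "hung parliament"; adding an assumed 3-8% social media
# bias correction pushed the published median into Tory-majority territory.

def call_outcome(tory_lead_pct):
    # Assumed rule of thumb: a national lead under ~3% leaves no majority
    if tory_lead_pct < 3.0:
        return "hung parliament"
    return "Tory majority"

raw_lead = 2.0  # what the algorithms saw (hypothetical figure)
for bias_adjust in (0.0, 3.0, 8.0):
    print(bias_adjust, call_outcome(raw_lead + bias_adjust))
# 0.0 → hung parliament; 3.0 and 8.0 → Tory majority
```

With zero adjustment the call stays "hung parliament" - the outcome that actually occurred - while any adjustment inside the assumed 3-8% band flips the call, which is exactly the failure mode being described.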
Another thing we'd note is that the narrative played out in the polls, punditry and press about the election bore little relationship to what we were seeing; as our system was nearly spot on in its prediction and most of the above were way off, I am inclined to believe our system a lot more. (It also makes me very cynical about the ability and motives of said polls, pundits and press.) To summarise, we saw the following:
- There was no "huge Tory lead", it was a chimera, they were fooling themselves.
- In week 1 the LibDems made most of the running but quickly fell away as the main parties got going
- After a week or so Labour started to close the gap on the Tories.
- Labour's Manifesto was a step change - from about then on Labour started to gain on the Tories faster. We saw no sharp impact from the Tory Manifesto, though the two were quite close together, so the effects may be wrapped up with each other.
- The only topics to "break the surface" as very influential were Brexit and the NHS, Scottish referendum a distant 3rd. All others were in the noise.
- From about 2 weeks before election day the end outcome had emerged on our system; if you go back to our post 1 week before the election, you can see the result graphically - it hardly changed.
In short – fire the human, promote the algorithms!
We have turned our analytic engine onto the UK election, looking at social media to predict it. It has worked for Brexit and Trump, and got both French rounds right (but underestimated Macron). It says that the Tories will win with about the same majority as now, with a margin of error somewhere between a small loss (largest party, but not enough to form a majority) and a c 20-25 seat lead.
(Update - the results are now known, and it was the lower bound of our prediction. In fact, the algorithms had it spot on; the error was me adjusting for a possible social media bias - see below - which in fact was non-existent.)
You can see more detail of the prediction here on the DataSwarm site
However, the hard bit is predicting the bias of social media. In 2015 social media thought Labour would win, but the Tories squeaked in - there was quite a strong bias towards Labour on social media. We don't know where social media is now; we believe there is less of a bias as there are more people on it, which makes it more representative of the demos. In our view it is worth several additional percentage points, which gives an outcome somewhere between a hung vote and a lead of several tens of seats - on average, about the same as now.
Tomorrow, all will be revealed and we will no doubt be tweaking the predictive algorithms again.