Online Publication Date: 31 Dec 2024

APPLYING MACHINE LEARNING IN THE U.S. POLITICAL LANDSCAPE: FORECASTING, POLLING, AND THE DOMESTIC SUPPLY CHAIN

Article Category: Research Article
Page Range: 52 – 57
DOI: 10.56811/PFI-22-0020

Several areas of the current U.S. political system are well suited to the implementation of machine learning capabilities, including election forecasting, polling, and vote-by-mail services. Introducing machine learning tools to these components of the U.S. political system can increase efficiency and accessibility throughout the sector and, in turn, reduce the labor required. This research analyzed the specified areas of U.S. politics by establishing a four-step framework, based on a previous exploratory case study of machine learning applied to operations management, and adjusted that framework to fit the nature of U.S. politics. The framework was used to determine the objectives to which machine learning would be applied within this sector, to identify the technology or strategies necessary to achieve those objectives, and to illustrate the expected effects on performance and on stakeholders. The analysis offers valuable insights for decision makers and provides an accessible, informative review of some of the potential applications of machine learning in the existing U.S. political system.

INTRODUCTION

The goal of this research endeavor is to provide a clear, concise, and digestible summary of a few potential applications of machine learning at the forefront of the current political landscape in the United States. The discussion reflects three overarching areas of the U.S. governmental and political system: forecasting, polling, and the domestic supply chain. By applying a structured framework to examine these aspects of domestic politics, this research establishes a customizable approach that can be extended to other technological innovations and areas of the social sciences. It is also intended to simplify public understanding of machine learning and artificial intelligence in U.S. politics, as well as the government’s role in the supply chain.

A primary goal of this research endeavor is to introduce concepts related to machine learning tools in the U.S. political sector. Machine learning is defined most briefly by Baştanlar and Özuysal (2014) in their Introduction to Machine Learning as “enabling computers to make successful predictions using past experiences.” Machine learning uses algorithms to recognize patterns in data and then produce increasingly informed decisions (Columbia Engineering, 2022, p. 1). By assessing large sets of data, also known as “training data,” machine learning processes are intended to model the relationship between inputs and outputs and deliver predictions that can then be measured and evaluated (Baştanlar & Özuysal, 2014).
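To make this train-predict-evaluate cycle concrete, the minimal sketch below fits a simple model on synthetic “training data” and measures its predictive accuracy on held-out examples. The synthetic data and the choice of a logistic regression model (via scikit-learn) are illustrative assumptions, not part of the cited studies.

```python
# Minimal illustration of the train-predict-evaluate cycle described above.
# The synthetic data and model choice are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical "training data": inputs (features) and known outputs (labels).
X = rng.normal(size=(500, 3))                  # e.g., three numeric indicators per record
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by the first two features

# Hold out part of the data so predictions can be measured and evaluated.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)              # learn the input-output relationship
predictions = model.predict(X_test)      # make predictions from "past experience"
print("Held-out accuracy:", accuracy_score(y_test, predictions))
```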

Machine learning is a function of artificial intelligence (AI) and also serves as a trailblazer for creating functional AI (Columbia Engineering, 2022). It is becoming evident that, in the coming years, AI will find broad application to data we would previously have considered “intangible,” producing actionable and quantifiable results, and machine learning processes will likely lead the way.

The examples assessed herein demonstrate some of the major facets of the U.S. political system. The first assessment addresses initiatives involving political elections, which inherently requires an additional focus on accurate election forecasting and the ability to make valuable, actionable projections based on data collection and public opinion polling. The second assessment addresses the points at which the U.S. government interfaces with the domestic supply chain, including government operations such as vote-by-mail services. With ever-changing demographics across the United States and the strain the COVID-19 crisis placed on the global supply chain, it seems increasingly important to give attention to machine learning’s potential in this capacity. The 2020 elections shifted the American public’s views on how we are able to vote, placing significant attention on the safety, efficiency, and effectiveness of the U.S. mail-in vote system. Providing insight into how to integrate machine learning into these facets of the U.S. governmental system will likely improve the ability to make informed decisions regarding candidacy and elections, as well as the operational efficiency of the U.S. Postal Service (USPS) in providing its necessary political services.

METHODOLOGY

Quantifying social sciences can require different approaches depending on the targeted result. Applying a methodological framework to understand how machine learning will affect each of these topics allows for an individualized approach while maintaining consistency. The methodology follows the framework in Helo and Hao’s (2022) exploratory study of the application of AI to operations and supply chain management. Helo and Hao’s (2022) work assessed the use of AI within the supply chain, and this research links those efforts to the portion of the supply chain within which the U.S. government operates. By distilling that assessment into its most important principles, a practical framework for this analysis was established. The framework provides criteria to organize not only the performance- and people-focused objectives that can be achieved, but also the technological means necessary to achieve critical objectives using machine learning in U.S. politics.

In the case of this study, Helo and Hao’s (2022) exploratory approach can be broken down into four major steps (objectives, technology/strategy, performance, and people) that can be applied individually to each political component being assessed; a simple illustrative encoding of the framework follows the list below:

  1. Objectives: Analyze or define the objectives of the implementation of machine learning on the component. (What can be improved using machine learning?)

  2. Technology/strategy: Specify the technology or strategies required to implement machine learning on the component. (What technology/methods are needed to improve?)

  3. Performance: Project the deliverable impact on key performance metrics. (How will performance change?)

  4. People: Project the deliverable impact on the stakeholders involved. (Who is affected, and how?)
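For readers who prefer a concrete artifact, the sketch below encodes this four-step framework as a simple Python data structure. The field names and the sample entry are illustrative assumptions, not part of Helo and Hao’s (2022) original study.

```python
# A lightweight representation of the four-step assessment framework.
# Field names and the sample entry are illustrative assumptions only.
from dataclasses import dataclass
from typing import List

@dataclass
class ComponentAssessment:
    component: str                  # the political component under review
    objectives: List[str]           # 1. what machine learning should improve
    technology_strategy: List[str]  # 2. technology/methods needed to improve it
    performance: List[str]          # 3. projected impact on key performance metrics
    people: List[str]               # 4. projected impact on stakeholders

# Example (hypothetical) entry for the election-forecasting assessment below.
forecasting = ComponentAssessment(
    component="Election forecasting",
    objectives=["Reduce statistical labor", "Increase forecast accuracy"],
    technology_strategy=["Machine learning models over aggregated polling and demographic data"],
    performance=["Percentage of correctly forecast election outcomes"],
    people=["Decision makers", "Programmers and data professionals"],
)
```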

ANALYSIS

Using the four steps outlined above, an assessment is made to further understand the application of machine learning to election politics in the United States. To provide a more detailed examination, this assessment specifically covers the larger aspects of election forecasting and public opinion polling. These tools were chosen based on their inherent value in the U.S. political and financial spheres. They capture large participating audiences and reach many segments of voters in the United States, making them apt subjects for applying new technology or operational procedures. Research centers are able to sample diverse groups of Americans, allowing for more variation among polling responses, and online-access polls have nearly quadrupled in availability to voters since 2012, allowing for enhanced participation among the U.S. populace (Kennedy et al., 2023). An additional assessment is made to further understand the application of machine learning to the domestic supply chain, particularly as it functions in relation to the political system itself. This portion of the analysis examines vote-by-mail procedures as they function as a service of the U.S. government and participate in the greater domestic supply chain.

Election Forecasting

The term “forecasting” refers to the ability to determine outcomes before they happen; in this case, it is applied to an ongoing election (Abramowitz, 2004). To forecast election results most accurately, statistical modeling and analysis are likely required. This sort of modeling requires consideration of demographic data, previous election history, public opinion polling, and more (Abramowitz, 2004). This breadth of information can cause data sets to become very large and to require extensive statistical labor. The next step for any decision maker is to evaluate the results of a forecasting model in order to make informed and actionable decisions.

  1. Objectives: As previously mentioned, election forecasting has become increasingly suitable for the application of AI or its capabilities, largely because of the field’s growing focus on statistical modeling. By utilizing machine learning, statistical labor in election forecasting can be reduced while producing better informed and more accurate forecasts based on larger sets of aggregated data. Thus, the objective for this first analysis is to decrease the labor required and increase the accuracy of forecasts. Applying machine learning methods to the advanced modeling tools used to forecast an election provides a clear path toward improving the percentage of correctly predicted outcomes. Between 2004 and 2016, 62% of expert election forecasts accurately predicted the results of presidential elections (Graefe, 2018). Machine learning can likely be a powerful force in driving this accuracy higher, even if only marginally. Marginal gains become significant as an algorithm continues to discover valuable patterns to turn into actionable data. This improved data will allow campaigns to better apply resources and improve electoral win efficiency.

  2. Technology/strategy: In any forecasting attempt, one must establish the time period over which the forecast is to be made and the amount of information available within that period (Kaggle, 2022). Once these parameters are defined, one can begin to develop a forecasting model in traditional programming environments such as Python (Kaggle, 2022); a minimal sketch of such a model follows this list. Enabling more advanced machine learning strategies throughout the development of a forecasting model’s algorithms can potentially allow larger sets of data to be processed more accurately. This has previously been applied in the private sector to forecast demand, and in many cases it has allowed decision makers to avoid both tangible and intangible hurdles. More importantly, with machine learning applied to demand forecasting, managers have been able to reduce the incorrect decision making that stems from inaccurate forecasts (Falak et al., 2022). Recommending a particular program for this analysis is challenging because most robust demand-forecasting programs that use machine learning techniques are entirely proprietary. In this circumstance, applying machine learning algorithms in their most traditional sense to political forecasting models can likely reduce labor and provide more accurate forecasts.

  3. Performance: In the case of forecasting elections, the clear key performance metric is the percentage of election results forecast correctly. This information can be highly valuable to campaign managers, public officials, and others involved in any political unit. Nothing so far contradicts the potential for machine learning to be applied here in the same fashion as it has been to demand in the private sector: using larger data sets to expand the parameters of forecasting while detecting and eliminating inaccurate forecasts. Effectively, this would enrich data quality and the ability to produce a more informed and accurate forecast for any election decision, giving campaigns or political units a better perspective when allocating resources.

  4. People: Regarding labor, the most evident stakeholders are the decision makers who benefit from advanced intelligence. Stakeholders also include the programmers and data professionals developing and maintaining machine learning algorithms and forecasting models. Any decision maker or manager can make use of the information or decisions provided by a forecasting model. Based on its application in the private sector, there is evidence that machine learning algorithms’ inherent ability to use previous data inaccuracies to inform future decisions would eliminate some degree of involvement by a programmer, processor, or manager.
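As referenced in the technology/strategy step above, the following minimal sketch illustrates the kind of Python forecasting model described there: a classifier trained on historical features (polling margin, incumbency, prior vote share) to estimate an election outcome. The feature names, the synthetic data, and the choice of a gradient-boosted model are illustrative assumptions rather than a documented campaign workflow.

```python
# Illustrative election-forecasting sketch (assumed features and synthetic data).
# The rows stand in for demographic data, prior results, and polling averages.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 400  # hypothetical state-year records from past election cycles

history = pd.DataFrame({
    "polling_margin": rng.normal(0, 5, n),        # average poll lead in points
    "incumbent_running": rng.integers(0, 2, n),   # 1 if an incumbent is on the ballot
    "prior_vote_share": rng.normal(50, 6, n),     # candidate party's share last cycle
})
# Synthetic outcome: win probability rises with polling lead and prior share.
signal = 0.4 * history["polling_margin"] + 0.1 * (history["prior_vote_share"] - 50)
history["won"] = (signal + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    history.drop(columns="won"), history["won"], test_size=0.25, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print("Backtest accuracy:", accuracy_score(y_test, model.predict(X_test)))

# A new (hypothetical) race: 3-point polling lead, incumbent running, 51% prior share.
upcoming = pd.DataFrame([{"polling_margin": 3.0, "incumbent_running": 1, "prior_vote_share": 51.0}])
print("Forecast win probability:", model.predict_proba(upcoming)[0, 1])
```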

Public Opinion Polling

Public opinion polling can refer to many things in the realm of the U.S. political system, so this research sets specific parameters for how it is defined. For this analysis, public opinion polling is classified as an independent, nongovernmental survey or questionnaire provided to a sample group with the intention of gathering information about the participants’ opinions on issues or elections (Pew Research, 2022). This information can then be used by political units, such as campaigns or interest groups, to make informed decisions about how to build policy, how to form an election platform, which demographics to target, and much more (Pew Research, 2022).

  1. Objectives: Over the past few decades, the format in which polling is conducted has remained relatively static, as typical questionnaire and surveying tactics generally provide the intended results from a sample. The mode or medium through which public opinion polling is conducted, however, has continued to evolve and has become increasingly reliant on web services and online data collection (Pew Research, 2022). This creates an opening for machine learning techniques and AI to be applied, making this essential political function ripe for technological growth. The evident objectives for this analysis are to use machine learning tools not only to improve the accessibility and effectiveness of data collection, but also to enrich the ability to process that data and turn it into more refined results.

  2. Technology/strategy: For this analysis, two particular machine learning approaches were found to be especially applicable to improving both the medium and the processing of public opinion polling. First, natural language processing (NLP) tools can be applied to the medium to improve data collection, effectiveness, and overall accessibility. Once data collection has begun, machine learning principles can be applied through deep learning programs to process the resulting unstructured data. NLP is a machine learning tool built to help computers communicate and interface with humans in their own language, allowing for spoken language feedback (SAS Analytics, 2022). NLP can be used in public opinion polling by allowing participants to submit survey answers using spoken language, increasing the accessibility of polling to the entire population. Promoting accessibility through NLP creates an even playing field for all Americans to have their say in any polling effort. Deep learning refers to a complex structure of algorithms and neural networks modeled after the functions of the human brain (Wolfewicz, 2022). Most importantly, deep learning allows for the processing of unstructured data, which is especially valuable if polling were to use NLP for data collection. Deep learning programs can process images, sounds, or text in a brain-like fashion using their neural networks (Wolfewicz, 2022). In combination, these tools can enable decision makers to build better informed policy based on less structured, naturally spoken data. NLP and deep learning programs typically function separately and do not interface (Upgrad, 2020), but this approach would allow pollsters to apply both technologies to improving polling in a variety of ways; a sketch of such a two-stage pipeline follows this list.

  3. Performance: In this approach, as data is collected through NLP, it must then be processed and refined efficiently. Deep learning programs could be used to draw conclusions from unstructured, spoken audio recordings (Wolfewicz, 2022). Conclusions drawn using deep learning follow a logical structure similar to that which humans would use (Wolfewicz, 2022), leading this analysis to conclude that its capabilities are applicable to processing the audio collected through NLP-driven public opinion polling. Deep learning therefore has the more significant impact on performance metrics, namely the efficiency and accuracy of processing unstructured data sets, whereas NLP has a more direct impact on the stakeholders involved.

  4. People: Regarding the implementation of NLP in the medium of data collection, the impact falls much more heavily on the people to whom it applies. The primary stakeholders are the U.S. citizens who participate in public opinion polling. This group is especially well positioned to benefit from the use of NLP in polling, as the ability to respond in spoken language dramatically increases accessibility. Effectively, this would allow Americans to participate equally regardless of age, language, or disability, particularly by accommodating participants through a spoken question-and-answer format. Audio recording alone can already capture such responses, but NLP programs are able to process that audio and convert it into meaningful data (Chapman et al., 2011). The use of deep learning will also have an impact, but on a different subset of stakeholders in the polling process: the pollsters and decision makers. Pollsters are the individuals who conduct or analyze polls (Merriam-Webster, n.d.), and the decision makers are those stakeholders previously described as able to act on poll results. Collecting more diverse and accurate information through NLP-driven polling and being able to process it through deep learning applications will certainly provide increased insight for pollsters and decision makers.
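As noted in the technology/strategy step above, the sketch below strings together an off-the-shelf speech-recognition model (the NLP stage) and a deep-learning text classifier (the processing stage) to turn a spoken poll response into a coded answer. The specific model names, the audio file path, and the candidate labels are assumptions for illustration; a production polling pipeline would require careful model selection and validation.

```python
# Two-stage sketch: spoken poll response -> transcribed text -> coded stance.
# Model names, file path, and labels below are illustrative assumptions.
from transformers import pipeline

# Stage 1 (NLP medium): transcribe a participant's spoken answer.
speech_to_text = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
transcript = speech_to_text("poll_response.wav")["text"]  # hypothetical audio file

# Stage 2 (deep learning processing): classify the unstructured response.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
labels = ["supports the policy", "opposes the policy", "undecided"]
result = classifier(transcript, candidate_labels=labels)

print("Transcript:", transcript)
print("Coded response:", result["labels"][0], f"(score={result['scores'][0]:.2f})")
```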

Vote by Mail

Vote by mail refers to the processes by which U.S. citizens are allowed to cast their vote in a state or federal election through a mail-in ballot. The ballot is delivered to the voter’s home, filled out, and then mailed back to the corresponding Secretary of State (USAGOV Contact Center, 2022). Many states extend a vote-by-mail option to their entire eligible population, whereas other states require a request; these requested ballots are considered absentee ballots.

In this context, any reference to vote by mail includes absentee ballots and all mail-in ballots processed for a state or federal election (League of Women Voters, 2020). Vote-by-mail procedures inherently require cooperation from the USPS and, in turn, directly participate in the greater domestic supply chain.

  1. Objectives: Vote-by-mail services rely inherently on the capabilities of the U.S. domestic supply chain, so ensuring the optimization and overall functionality of supply chain procedures is an evident path toward improving vote by mail at the state and federal levels. The objective is to source and implement a machine learning tool that can directly improve the efficiency of vote-by-mail processing in both regional and national USPS facilities. Information regarding the techniques used to process mail-in ballots is strictly proprietary; thus, this analysis focuses on projections regarding the application of known machine learning tools.

  2. Technology/strategy: The term “processing” in the context of vote by mail is defined differently by different states. One commonality across the country is the requirement to compare the signature on the outside of the ballot envelope with the corresponding voter’s signature on file to confirm a match and the legitimacy of the ballot (National Conference of State Legislatures, 2022). Vote-by-mail procedures could implement visual inspection through AI-driven camera stream analysis. This analytical tool has been used to spot errors or inconsistencies in large production facilities (Helo & Hao, 2022). It can be trained easily using images and can effectively match signatures nearly automatically (Helo & Hao, 2022). Using its neural networks, an AI-based camera stream analysis could match visual data and establish the necessary linkages to massive sets of statewide or nationwide data, such as a voter signature repository; a simplified sketch of such a signature comparison follows this list.

  3. Performance: Implementing AI-based camera stream technology would likely substantially improve the speed at which mail-in ballots can be processed, because the initial processing step (signature matching) could be completed while the singulated mail is still on the conveyor belt. Singulation refers to creating a single-file, nonstacked flow of ballots so that top-read and bottom-read camera systems can adequately see the signature (M. B. Costanzo, personal communication, November 2022). This eliminates the need to physically process and organize mail-in ballots before they are authenticated by an employee or scanner. Eliminating unnecessary steps is a simple way of improving the overall efficiency of the supply chain. In circumstances such as presidential elections, in which mail-in ballots are increasingly popular, eliminating processing steps could lead to substantial returns in time saved and potentially produce election results faster. The technology can also improve the overall accuracy of the signature match and greatly reduce the potential for human error or intervention by bad-faith actors.

  4. People: The processing of mail-in ballots requires several steps, the first of which has been established as signature verification. This is typically handled by an employee who must manually verify the signatures or run them through a scanner (National Conference of State Legislatures, 2022); these employees would be the primary stakeholders. Ballots must then be opened, prepared for tabulation, and stacked accordingly by the same employee (National Conference of State Legislatures, 2022). This turns the processing of mail-in ballots into tens of thousands of hours of labor requiring a large employee base in each state, and in national or presidential elections these labor hours can increase dramatically. Implementing AI camera technology into the supply chain’s operational line can remove the initial processing step and potentially reduce the necessary labor hours substantially.
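As referenced in the technology/strategy step above, the minimal sketch below compares a scanned envelope signature against a signature on file using a structural-similarity score, a simple stand-in for the far more sophisticated neural-network matching an AI camera stream would perform. The file paths, the threshold value, and the similarity metric are illustrative assumptions, not USPS procedure.

```python
# Simplified signature-comparison sketch (illustrative stand-in for an
# AI camera-stream matcher; file paths and threshold are assumptions).
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.metrics import structural_similarity as ssim

def load_signature(path, shape=(128, 512)):
    """Read a signature image, convert to grayscale, and normalize its size."""
    image = imread(path)
    if image.ndim == 3:                       # drop color/alpha channels if present
        image = rgb2gray(image[..., :3])
    return resize(image, shape, anti_aliasing=True)

# Hypothetical inputs: the envelope scan and the voter's signature on file.
envelope = load_signature("ballot_envelope_signature.png")
reference = load_signature("voter_registry_signature.png")

score = ssim(envelope, reference, data_range=1.0)
MATCH_THRESHOLD = 0.70  # assumed cutoff; a real system would be carefully validated

print(f"Similarity score: {score:.2f}")
print("Flag for human review" if score < MATCH_THRESHOLD else "Signatures consistent")
```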

CONCLUSION

There is significant room for machine learning to be applied to the U.S. political system in a variety of ways. Approaching the varying components of the U.S. political system with an established framework allows the analysis of each aspect to provide insight into the objectives and applications that could be used for future political success and efficiency. These analyses reflect the application of machine learning at a softer, more surface level in the case of forecasting and in a more technical, use-based approach for examples such as public opinion polling. Use cases and tools, such as NLP, provide clear and actionable steps that any political decision maker could consider to increase overall functional effectiveness. Further exposure to machine learning applications in the U.S. political community could lead to desirable trends of increasing efficiency, decreasing labor, and increasing accuracy if these technologies, strategies, or applications are taken into consideration.

Copyright: © 2024 International Society for Performance Improvement

Contributor Notes

SYED ADEEL AHMED holds a BS in electronics & communication engineering from Osmania University and two MS degrees from the University of New Orleans in electrical engineering (MSEE) and engineering management (MSENMG). He is a Microsoft Certified Professional and Business Strategy Game Champion. Dr. Ahmed was awarded his PhD in engineering & applied sciences in 2006 from the University of New Orleans. He has published more than 40 top journal research papers and book chapters. Email: sahmed1@xula.edu

ANDREW MARK COSTANZO is an MBA and PhD student in engineering management at the University of New Orleans. Currently, he is the director of IT and data sciences for Vello in New Orleans, Louisiana. He is also a part of the NOLA Climate Reality chapter. Email: amcostan@my.uno.edu
