Abstract

Although of high relevance to political science, the interaction between technological change and political change in the era of Big Data remains a somewhat neglected topic. Most studies focus on the concepts of e-government and e-governance, and on how already existing government activities performed through the bureaucratic body of public administration could be improved by technology. This article attempts to build a bridge between the field of e-governance and theories of public administration that goes beyond the service delivery approach dominating a large part of e-government research.

Using the policy cycle as a generic model for policy processes and policy development, this article presents a new look at how policy decision making could be conducted on the basis of ICT and Big Data.

The policymaking process

The concept of governance has been featured in many fields. Kjaer (Kjaer, A. 2004. Governance. Cambridge, UK: Polity), for example, distinguished between governance in public administration and public policy, governance in international relations, European Union governance, governance in comparative politics, and good governance as extolled by the World Bank (Rhodes, R. 2007. Understanding governance: Ten years on. Organization Studies 28(8): 1243–1264. DOI: 10.1177/0170840607076586).
For the purposes of this article, we focus on governance in the realm of public administration and public policy, using a general definition as provided by Rhodes:

• Interdependence between organizations. Governance is broader than government, covering non-state actors. Changing the boundaries of the state means the boundaries between public, private, and voluntary sectors become shifting and opaque.
• Continuing interactions between network members, caused by the need to exchange resources and negotiate shared purposes.
• Game-like interactions, rooted in trust and regulated by rules of the game negotiated and agreed by network participants.
• A significant degree of autonomy from the state. Networks are not accountable to the state; they are self-organizing.
Although the state does not occupy a privileged, sovereign position, it can indirectly and imperfectly steer networks. In a wider sense, governance deals with “how the informal authority of networks supplements and supplants the formal authority of government. It explores the limits to the state and seeks to develop a more diverse view of state authority and its exercise” (Rhodes 2007, 1247). This forces governments to change the traditional top-down command structure into a structure that includes negotiations with civil society, and to include the public in the decision-making process.
The boundaries between state and civil society are changing and becoming more porous, a development that has been accelerated by new modes of interaction and participation by means of ICT. Claus Offe described this collaboration between state and civil society as a cooperative network of “practitioners of governance, whoever they may be, [who] logically and politically can do without opposition, for all relevant actors are included” (Offe, C. 2009. Governance: An ‘empty signifier’? Constellations 16(4): 550–562. DOI: 10.1111/j.1467-8675.2009.00570.x, 551). ICT is a key enabler of network formation and should be central to any contemporary analysis of governance.
The actual possibilities of implementation will vary from state to state due to different traditions and organizational cultures (Andersen, S., and K. Eliassen. Making policy in Europe: The Europeification of national policy-making. London, UK: Sage; Brans, M. 1997. Challenges to the practice and theory of public administration in Europe. Journal of Theoretical Politics 9(3): 389–415). Yet while there is no one-size-fits-all approach, there are generic models that can produce a framework of recommendations adaptable to state-specific circumstances. As we will show in the following parts of the article, it is especially the tried and proven model of the policy cycle that we believe can be efficiently adapted to describe policymaking in the digital age. There is a growing consensus in the literature that the use of ICT is a necessity for countries that aim to improve their overall performance (Gupta, M., and D. Jana. 2003. E-government evaluation: A framework and case study. Government Information Quarterly 20(4): 365–387. DOI: 10.1016/j.giq.2003.08.002; Layne, K., and J. Lee. 2001. Developing fully functional e-government: A four stage model. Government Information Quarterly 18(2): 122–136. DOI: 10.1016/S0740-624X(01)00066-1). The UK Office for National Statistics reported that private sector productivity increased by 14% between 1999 and 2013, compared with a decline of 1% in the public sector between 1999 and 2010 (Micklethwait, J., and A. Wooldridge. 2014. The fourth revolution: The global race to reinvent the state. New York: The Penguin Press, 19). Other advanced economies are in a similar crisis, with public administrations growing in terms of size and cost but not in terms of efficiency. A possible remedy could be that public policymaking is increasingly influenced by research and data-based intelligence gathering by government agencies (Heinrich, C. 2007. Evidence-based policy and performance management: Challenges and prospects in two parallel movements. The American Review of Public Administration 37(3): 255–277; Warren, E. Market for data: The changing role of social sciences in shaping the law. Wisconsin Law Review 1: 1–43). Despite the growing role of private sector data collectors like Google, governments have not fallen behind in this respect. As Alon Peled pointed out, “the public sector’s digital data troves are even bigger and growing at a faster rate than those in the private sector” (Peled, A. 2014. Traversing digital Babel: Information, e-government, and exchange. Boston: MIT Press, Kindle location 562). Yet the abundance of this data has not led to the improvement of the public sector that one would have expected. The reason for this is the dilemma confronting public administrations undergoing technological modernization: There is ample empirical evidence that the main driving force when it comes to innovation in the government sector is bureaucratic autonomy (Carpenter, D. 2001. The forging of bureaucratic autonomy: Reputations, networks, and policy innovation in executive agencies, 1862–1928. Princeton, NJ: Princeton University Press; Evans, P. 1995. Embedded autonomy: States and industrial transformation. Princeton, NJ: Princeton University Press; Fukuyama, F. 2013. What is governance? Governance 26(3): 347–368). Autonomy describes the discretion of public agencies in their decision-making processes.
While such autonomous decision-making power is often viewed as a positive characteristic, in the realm of Big Data it can pose a problem. Autonomous agencies also collect data individually and have an inclination not to share their information in order to stay independent. The failure of ICT projects in governments is caused in 80%–90% of cases by the unwillingness and inability of different government departments to share data-based information (Fawcett et al. 2009. Supply chain information-sharing: Benchmarking a proven path. Benchmarking: An International Journal 16(2): 222–246; Kamal, M. 2006. IT innovation adoption in the government sector: Identifying the critical success factors. Journal of Enterprise Information Management 19(2): 192–222; Peled 2014). Peled demonstrates that it is not the data or the technology that is at the heart of the problem, but their application in a bureaucratic environment (Peled 2014, chapters 1 and 2). At the same time, however, autonomy will also become more important: As this article will demonstrate, one of the possible advantages of Big Data is fast policy evaluation, allowing the responsible departments of public administrations to find out within a short time whether their policies have the desired effect. Given this opportunity of fast evaluation, public administrations would also need the necessary autonomy to quickly change the modes of policy implementation if the outcome is considered unsatisfactory.
The goal is not a smoother operation of already existing services but a reformation of the policymaking structure itself. The nature of this structure and the suggested possible reforms are based on the idea of the policy cycle.

The data domain

The more accurate, high-quality information is available, the better decisions will be. However, as Evgeniou and colleagues pointed out in the Harvard Business Review in 2013, we should not be talking about Big Data making decisions better but about diverse data and about using new technologies, processes, and skills to avoid the risk of drowning in Big Data.
Declining storage costs and increasing storage capacities following Moore’s law (Schaller, R. 1997. Moore’s law: Past, present and future. IEEE Spectrum 34(6): 52–59. DOI: 10.1109/6.591665) have led to an attitude of “no data lost” and subsequently to unlimited growth of data. Internal data is either data collected or produced to fulfill a task (in the case of public administration: to carry out its legal obligations) or trail data obtained as a result of ICT processes, such as interaction patterns with websites, workflow traces, authentication data, system survey data, etc. Data which is produced and subsequently stored as a byproduct of the execution of business processes has recently been identified as a valuable source for identifying (mining) or improving (reengineering) processes in an automated manner (Van der Aalst, W. 2011. Process mining: Discovery, conformance and enhancement of business processes, 1st ed. New York: Springer). Instead of relying only on self-produced or self-collected data, external data, either openly available or bought from data brokers, can improve the quality of business processes. One prominent example of improving decisions by including external data is the first Netflix challenge of 2008, where the winning team “cheated” by incorporating external data into the system recommending movies to Netflix subscribers (Buskirk, E. How the Netflix prize was won. Blog of WIRED Magazine. (accessed September 22)). The greater part of the data available today is unstructured data.
Unstructured data is data for which no schema exists or where the underlying schema is unknown. In practical terms this means that for a computer system the effort to automatically derive meaningful insights is much higher than in the case of structured data. It seems natural that a vast amount of data is unstructured, as unstructured data is much better suited to storing knowledge than structured data. Therefore a considerable amount of time is spent on reshaping unstructured data into structured data in order to facilitate automated processing through ICT systems. The problem of structured vs. unstructured data is aggravated by the fact that much data produced today originates from sensors built, for example, into smartphones, and is contained in videos, images, or textual information exchanged in social networks. According to estimates presented at Oracle's OpenWorld conference in 2014, 88% of the data available today consists of unstructured, unannotated, nonlinked data. This led Davenport to the assessment that, more than the amount of data itself, the unstructured data from the Web and sensors is a much more salient feature of what is being called Big Data (Davenport, T., and J. Harris. Competing on analytics: The new science of winning, 1st ed. Boston: Harvard Business Review Press). No amount of human effort would suffice to classify these huge volumes of data. This puts data mining combined with machine learning into the spotlight of computer science research, which is experiencing a renaissance reminiscent of its heyday in the 1980s and 1990s.

The process domain

The process domain with respect to Big Data has two facets: One is the technical facet, that is, the ability to derive meaningful insights from data by algorithmically applying transformations in order to process the data. The second facet is business processes, which need to be adapted in order to maximize the friction-free flow and throughput of data throughout an organization. This involves the sharing of information between, and collaboration of, traditionally separated subentities.
In this section we will predominantly concentrate on the technical processes. Data per se has little value unless it gets organized, processed, and interpreted in order to derive meaning. The organizing of data is performed by data management processes, whereas the processing of data is an analytical task. Typical tasks associated with data management encompass the storage, conversion, mapping, and filtering of data. In terms of analysis we differentiate between data mining (DM) and actual data analysis (DA) for the purpose of drawing conclusions about the data, with the goal of identifying undiscovered patterns and hidden relationships (Coronel, C., S. Morris, and P. Rob. Database systems: Design, implementation, and management. Stamford, CT: Cengage Learning, 690). Unlike traditional DA and DM tasks, however, in the domain of Big Data these algorithms have to meet special requirements:

• Scalability. With data being diverse (rich in variety) and arriving at varying, often high speeds, algorithms have to be scalable.
Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth (Bondi, A. 2000. Characteristics of scalability and their impact on performance. In Proceedings of the 2nd International Workshop on Software and Performance (WOSP ’00), 195–203. New York: ACM). Linear scalability means, for example, that if the amount of data doubles, the time required to process that data doubles, too. However, with the network effects of Big Data becoming effective, linear scalability cannot be sustained, as, for example, the number of pairwise similarity comparisons between datasets grows quadratically. The HyperLogLog algorithm is one fascinating example of a Big Data-inspired approach to the seemingly easy problem of counting distinct things, as it produces very good results for the cardinality estimation problem while requiring minimal time and space (Heule, S., M. Nunkesser, and A. Hall. 2013. HyperLogLog in practice: Algorithmic engineering of a state of the art cardinality estimation algorithm. In Proceedings of the EDBT 2013 Conference, Genoa, Italy, March). Scalability of a BDA system goes hand in hand with timeliness.

• Timeliness. One characteristic of BDA is the ability to process large amounts of data in real time. Instead of loading data, organizing it, processing it, and presenting results, insights become available almost instantaneously the moment the data changes.
Data is no longer imported into a central data repository but is instead made available to BDA as a virtual data source, an approach chosen, for example, by the Hadoop Big Data processing ecosystem via the Hadoop Distributed File System (Malar, Ragupathi, and Prabhu. The Hadoop distributed file system: Balancing portability and performance. International Journal of Computer Sciences and Engineering, Foundation of Computer Science). In the section on Continuous Evaluation in the E-Policy Cycle we will discuss timeliness and its effects on the policy cycle.

• Organization.
In order to use data effectively, organization is required. The early days of data organization were characterized by locked-in datasets behind walled gardens. Increasing economic pressure has put the customer/citizen at the heart of considerations: first data was shared, later interfaces were crafted to exchange information. Encapsulating data as a service has the advantage that only the required amount of information that the authorized party is allowed to obtain gets transferred, and it enables the data-providing agency to track and trace data demand at a fine-grained level.

The purpose domain

In public administration, data can be considered as input to processes aimed at gaining new insights to enable better regulation. Comprehensive coverage of BDA in public administration is provided by a study of the TechAmerica Foundation (Demystifying big data: A practical guide to transforming the business of government. Washington, DC: TechAmerica Foundation. (accessed June 22), 12), which identifies these fields of application:

• Efficiency and administrative reform: optimization of administrative procedures through information preparation and automation of tasks.
• Security and fight against crime: mission planning of fire brigades, ambulance and police units, fight against terrorism, fraud prevention.
• Public infrastructure: healthcare system support such as detection of epidemics, diagnostics, therapy and medication; control of public and private transport, smart metering, energy, education.
• Economy and labor: optimized management of the labor market, performance measurement of research funding, supervision of the financial market, food control and pandemic disease control.
• Modernization of legislation: analysis of scenarios in legislation, trend analysis, complex impact assessment in real time, new forms of e-participation.
• Citizen and business services: usage of new technologies to enhance the quality and number of services provided by the public administration; new and enhanced services through interconnection of data and automation of processes.

Chen, Mao, and Liu (2014. Big data: A survey. Mobile Networks and Applications 19(2): 171–209. DOI: 10.1007/s11036-013-0489-0) presented three main fields of BDA application in public administration, namely scientific exploration, regulatory enforcement, and data as the basis of public information services. Asquer (Asquer, A. 2013. The governance of big data: Perspectives and issues. SSRN Scholarly Paper ID 2272608. Rochester, NY: Social Science Research Network) further added improvements in insight into individual and societal behavior for more fact-based decisions in politics and the economy.
The economic domain

Assessing the financial benefit associated with Big Data, a May 2011 report of the McKinsey Global Institute predicts $300 billion in annual value to the US health care system and €250 billion in annual value to Europe’s public sector administration. The report also concludes that governments will be among those for which realizing the benefits will be hardest, as technological barriers have to be overcome, personnel requires considerable additional training, and organizational changes are overdue (Manyika, J., et al. 2011. Big data: The next frontier for innovation, competition, and productivity. Washington, DC: McKinsey Global Institute, 5, 10). Relevant experience in the private sector has shown indisputable, tangible benefits generated by the application of Big Data methods and technology. Companies in the top third of their industry employing Big Data mechanisms were on average 5% more productive than their competitors (McAfee, A., and E. Brynjolfsson. 2012. Big data: The management revolution. Harvard Business Review, October).
While these figures are tempting and may well serve as a door opener when it comes to convincing policymakers to invest in Big Data or to adjust funding schemes, a thorough and scientifically rigorous model that would account for the vast indirect revenue cycle associated with Big Data benefits is still missing. This is backed by Harvard fellow David Weinberger, who carefully argues that (1) models capable of capturing the nature of Big Data and giving a clear answer in terms of anticipated outcomes fail because the world is more complex than models can capture; and (2) computer simulations show how things work even when people may not completely understand why they work (Weinberger, D. 2011. Too big to know: Rethinking knowledge now that the facts aren’t the facts, experts are everywhere, and the smartest person in the room is the room. New York: Basic Books, 127).

Opportunities

Big Data technology can enable fragments of related yet heterogeneous information to be matched and linked together quickly and nonpersistently to identify as yet undiscovered information flows. Hidden patterns and correlations will be identified to support common-sense experience or received wisdom.
Predictive analytics applied on top will increase the quality of scenario planning and result in true evidence-based policymaking. Because of the organizational changes required to leverage the promised benefits of BDA, organizations will learn how they work and how their customers/citizens use them, and will design services accordingly. BDA will help to identify areas of underperformance, support the reallocation of resources to their most productive use, and thus increase overall performance. This is facilitated by the possibility of analyzing multiple data sources and deducing patterns. As a consequence, the time required to produce reports will be reduced and may be devoted to performing more skilled kinds of analytics. For citizens, BDA-improved processes will cut down paperwork: processes reorganized internally to better integrate data for analytics will facilitate cooperation among ICT systems, which reduces the need for citizens to repeatedly provide the same information. As a result, citizens will get questions answered, and receive benefits they are entitled to, more quickly. Furthermore, services may be proactively proposed as a result of large-scale predictive analytics, based on services used by comparable citizens.

Challenges

Bringing data together does not come without caveats.
Existing regulations concerning privacy and data protection have to be respected. The balance between socially beneficial uses of Big Data and the potential harm to privacy and other values is fragile. This raises intricate questions about how to ensure that discriminatory effects resulting from, for example, automated decision processes can be detected, measured, and redressed. Detailed knowledge about citizens makes it possible to forecast public behavior with high precision. This power requires responsible leadership and a system of checks and balances. The risk of a massive loss of informational privacy has grown so much relative to the benefits that there is no longer any excuse to ignore the tradeoff. The government is required to pursue this agenda with strong ethics: Big Data holds much potential, but it can put civil liberties under pressure. In order to leverage BDA effects, the organizational set-up has to be prepared for speed: Big Data is, inter alia, about volume and velocity; however, the generally accepted attributes of government seldom include speed. Internally, an attitude of openness is required within government to enable the aggregation of data beyond department borders, a challenge as great as that of making evidence-based, data-driven decisions the standard and preparing for an attitude of “good enough and failure.” For the government CIO, Big Data and related technology cause new challenges. Big Data and veracity go hand in hand with questions concerning data quality and bias. While the requested attitude of “good enough and failure” relativizes exactitude, data origin and trust are still matters of concern. The multitude of possible external datasets as input to BDA also redefines the threshold between interoperability, standards, and heterogeneity. In the future, Big Data-enabled ICT architecture will require even more integration adapters to connect legacy systems with BDA systems and cloud storage providers.

The policy cycle consists of seven stages as well as an additional feedback cycle. The first step is agenda setting, where problems are identified and the need for action is formulated. This leads to a policy discussion aimed at identifying the right way to meet the problem defined at the agenda setting stage. The policy discussion will lead to increased public awareness and will therefore address not only the policy options but also the “conceptual foundations of the policy” and the motivations that led to the agenda setting in the first place.
As a result of the policy discussion, the actual policies will be formulated and translated into legislative and executive language, followed by the actual adoption of the policy and the provision of the necessary (budgetary) means. These initial steps are followed by the actual implementation of the policy. Assuming that there is a performance expectation, the act of implementation will lead to an evaluation of the provision of means, that is, of whether the means are sufficient to actually implement the policy. Once the implementation has been accomplished, first an outcome evaluation will be performed to establish whether the implementation was successful, followed by a long-term evaluation that looks at the entire process from stage one, the point of agenda setting. The evaluation stage is the point at which the most profound behavior changes can be initiated, since the knowledge derived from evaluations will affect future behavior. It is important to mention that decision and transaction costs are different at each stage, and that the later stages are less responsive to public opinion or outside expertise than the earlier ones (Tresch, A., P. Sciarini, and F. Varone. A policy cycle perspective on the media’s political agenda-setting power. Reykjavik, Iceland, 5). In the following section we will present areas of BDA application in the aforementioned policy cycle phases in the light of an ICT and data framework aimed at overcoming some of the shortcomings of the individual steps. This generic model should serve as a theoretical roadmap illustrating how the process of policymaking can be improved at each stage, from agenda setting to evaluation, before we introduce and discuss our reinterpreted e-policy cycle.
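As an aid to the reader, the stages just described can be summarized in a minimal sketch. The following Python fragment is purely illustrative and not part of the original model; the stage names follow the description above, and the feedback behavior is reduced to a single end-of-cycle check.

    POLICY_CYCLE = [
        "agenda setting",
        "policy discussion",
        "policy formation",
        "policy acceptance",
        "provision of means",
        "implementation",
        "evaluation",
    ]

    def run_cycle(policy_has_desired_effect):
        # Walk the seven stages in order; the classic cycle evaluates only at the end.
        for stage in POLICY_CYCLE:
            print("stage:", stage)
        # Feedback cycle: an unsatisfactory evaluation feeds back into agenda setting.
        if not policy_has_desired_effect():
            print("evaluation unsatisfactory -> feed back to agenda setting")

    # Example run with a (hypothetical) evaluation that reports failure.
    run_cycle(lambda: False)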
Big data in the policy cycle

In the preceding section we discussed problems of government identity and the principles of governance with respect to the usage of ICT, and introduced the concept of evidence-based policymaking as a core principle of governance, which is increasingly shifting from process orientation toward performance orientation. We contextualized the role of data and information in policymaking and introduced Big Data in policymaking. In this section we will introduce specific fields of application of BDA in the policy cycle. Data and information are required as a basis for evidence, and the more high-quality information is available in a decision process, the higher the quality of decisions will be. However, holding the quality of total information fixed while merely increasing the amount of attribute information will decrease decision quality (Keller, K., and R. Staelin. 1987. Effects of quality and quantity of information on decision effectiveness. Journal of Consumer Research 14: 200–213). This relationship is a result of information overload, a failure of the filters intended to separate noise from relevant information: the more data and information become available in a decision process, the more complex the decision model will get and the longer the decision making itself will take. Given that hardly any information is ever deleted, we witness an increasing reluctance to apply filters for the purpose of reducing a problem’s complexity. According to Big Data claims, potentially meaningful information lies within data declared as noise, yet advanced methodologies and algorithms for extracting this valuable information from noise are not in widespread use. As set out in the preceding section, Big Data, and specifically BDA, promises faster and better insights, given that correlations can be automatically deduced by the application of machine learning algorithms, data can be observed in its entirety, and analytical results theoretically become available instantaneously.
The ability to react early to the adverse effects of a decision is a comparative advantage of applied BDA, which we will describe in relation to the policy cycle process steps.

Agenda setting

The key issue with respect to agenda setting is identifying the issues that will grasp the attention of policymakers. Despite the strong record of research on this question, the results remain mixed (Dearing, J., and E. Rogers. 1996. Agenda-setting. Thousand Oaks, CA: Sage; Rogers, E., J. Dearing, and D. Bregman. 1993. The anatomy of agenda-setting research. Journal of Communication 43(2): 68–84). A central role is most definitely played by the media, which have the ability to frame issues and spread relevant information. Especially in democratic systems this has a strong impact on actual agenda setting (McCombs, M., and D. Shaw. 1972. The agenda-setting function of mass media. Public Opinion Quarterly 36(2): 176–187. DOI: 10.1086/267990; Scheufele, D. 1999. Framing as a theory of media effects. Journal of Communication 49: 103–122). There is a worrying aspect to this, for there is some evidence indicating that high-risk issues in the realm of environmental policy, for example, receive little attention and therefore only limited funding (Barkenbus, J. 1998, September. Expertise and the policy cycle. Energy, Environment, and Resources Center, University of Tennessee, Knoxville, TN, 3). Scientific expertise, on the other hand, seems to be only of minor importance, and “scientific research results do not play an important role in the agenda-setting process” (Dearing and Rogers 1996, 91). Nevertheless, scientific evidence does play a part in the process: although the media might push an issue, most likely they will base their reporting on some form of scientific expertise to legitimize their choice of issues (Barkenbus 1998).
Additionally, the interest formulated by the media forces politicians to act in order not to be seen as indifferent to a topic that is gathering widespread public attention. In a world of evolving digital media and online publics, the dynamics of issue agendas are becoming more complex. The emergence of social media has generated renewed attention for the reverse agenda-setting idea. With a few keystrokes and mouse clicks, any audience member may initiate a new discussion or respond to an existing one with text, audio, video, or images. One way for governments to identify emergent topics early and generate relevant agenda points would be to collect data from social networks with high degrees of participation and try to identify citizens’ policy preferences, which can then be taken into account by the government in setting the agenda. Yet such possibilities would have to be used with extreme caution, in the light of the knowledge that social media activity can influence policy decisions and change the behavior of citizens, thereby possibly distorting the actual salience of an issue for the general public and overemphasizing the concerns of a vocal minority (Lazer et al. 2014. The parable of Google Flu: Traps in big data analysis. Science 343(6176): 1203–1205. DOI: 10.1126/science.1248506). Online news sources, which represent the online face of traditional broadcast and print media, dominate public attention to news online. A large-scale survey carried out by Neuman and colleagues using BDA (Boolean search in Big Data to determine patterns of issue framing, and parallel time-series analysis) showed that the public agenda as reflected in social media is not locked in a slavish or mechanical connection to the news agenda provided by the traditional news media. Social media spend a lot more time discussing social issues such as birth control, abortion, and same-sex marriage, while they are less likely to address issues of economics and government functioning. However, the survey also critically noted the problematic practice of simply equating online tweets, blogs, and comments with “public opinion” in general, given that social media users are not demographically representative, and diverse social media platforms undoubtedly develop local cultures of expressive style that influence the character of what people choose to say (Russell Neuman, W., L. Guggenheim, S. Mo Jang, and S. Y. Bae. 2014. The dynamics of public attention: Agenda-setting theory meets big data. Journal of Communication 64(2): 193–214. DOI: 10.1111/jcom.12088, 196 and 210).
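To make the idea of identifying emergent topics from social network data concrete, consider the following deliberately simplified Python sketch. The posts, the keyword, and the spike rule are all invented for illustration; a real system would have to model baselines, seasonality, and the demographic biases discussed above far more carefully.

    from collections import Counter
    from datetime import date

    # Invented sample data: (day, post text). A real system would ingest a
    # social media stream via an API.
    posts = [
        (date(2015, 3, 1), "commute was fine today"),
        (date(2015, 3, 2), "potholes everywhere on main street"),
        (date(2015, 3, 3), "another pothole damaged my tire"),
        (date(2015, 3, 3), "the city should fix these potholes"),
    ]

    def mentions_per_day(posts, keyword):
        # Count how many posts per day mention the keyword.
        counts = Counter()
        for day, text in posts:
            if keyword in text.lower():
                counts[day] += 1
        return counts

    counts = mentions_per_day(posts, "pothole")
    baseline = sum(counts.values()) / max(len(counts), 1)  # naive average baseline
    for day in sorted(counts):
        if counts[day] > baseline:  # naive spike rule, for illustration only
            print(day, "->", counts[day], "mentions: candidate agenda item")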
While automated and large-scale analysis of news outlets has made it possible to predict events connected to the Arab Spring and events in Eastern Europe, sometimes surpassing the predictive power of traditional models of foreign policy (Leetaru, K., and P. Schrodt. 2013. GDELT: Global data on events, location, and tone, 1979–2012. Paper presented at the ISA Annual Convention; Leetaru, K. 2011. Culturomics 2.0: Forecasting large-scale human behavior using global news media tone in time and space. First Monday 16(9). DOI: 10.5210/fm.v16i9.3663), the broad spectrum of issues to be dealt with cannot be covered by relying solely on public agenda setting. BDA technologies are capable of predicting events not only in the realm of foreign policy but also in the domestic realm. This is demonstrated by countries like China or Singapore, which, rather than simply banning political discussions, observe and quantify them in order to gain information about the policy preferences of their citizens and use them as early warning systems for potential political unrest (King, G., J. Pan, and M. Roberts. 2013. How censorship in China allows government criticism but silences collective expression. American Political Science Review 107(2): 326–343. DOI: 10.1017/S0003055413000014).

Policy formation and policy acceptance

The formation of a policy is the description of the steps that are supposed to be undertaken in the implementation phase. The two stages of policy formation and policy acceptance have a different relation to (Big) Data, as they are strongly anchored in the legal framework of government conduct. Big Data, however, can play a role in the realm of evaluation.
Once the policy has moved from the discussion phase to the policy formation phase, policy documents can be scrutinized and governments can adopt or shape actual policies according to public demands. Especially in democracies, the credibility and legitimacy of new policies are important, so it will be a useful undertaking to use means of data collection to investigate the acceptance of specific policies among different groups of society. The term acceptance in the digital age can be expanded beyond the political act of voting by political representatives to also refer to the general acceptance within the population.
In the policy formation and acceptance cycle stages, BDA can contribute to evidence-based policymaking through advanced predictive analytics methodologies and scenario techniques. One example of this would be the use of BDA to analyze and prevent the spread of disease (Harris, S. The social laboratory. Foreign Policy. (accessed July 22)). Government and administration decision making is often characterized by a very high number of independent variables and conflicting target functions. Better regression algorithms are needed to handle these high-dimensional modeling challenges. Thankfully, regression analysis as a cornerstone of predictive analytics does not rely exclusively on ordinary least squares (OLS). Advances in statistical modeling have resulted in algorithms that are much better suited to describing “the real world” instead of resorting to artificial assumptions. The predictive modeling problem can be described by imagining N entries that are associated with N outcome measurements, as well as a set of K potential predictors. In many cases the information about each entry is rich and unstructured, so there are many possible predictors that could be generated. Indeed, the number of potential predictors K may be larger than the number of observations N. An obvious concern is overfitting: with K > N it will typically be possible to perfectly explain the observed outcomes, but the out-of-sample performance may be poor. Ridge regression, for one, is a modeling technique that addresses the multicollinearity problem found in OLS. Multicollinearity occurs where two or more predictor variables in a statistical model exhibit high correlation. Traditional regression approaches as input to predictive models would then involve high error variance and render predictions highly volatile. The application of ridge regression to overcome this phenomenon, however, comes at a price, as it is computation heavy (Zou, H., T. Hastie, and R. Tibshirani. 2006. Sparse principal component analysis. Journal of Computational and Graphical Statistics 15(2): 265–286. DOI: 10.1198/106186006X113430). Another novel algorithm used in Big Data predictive analytics is the elastic net. As a learning algorithm it helps to set up a well-defined, parameterized model on a large dataset without overfitting. Ridge regression and the elastic net are interesting in a BDA scenario as they can operate on very large datasets and their computations can be parallelized, taking advantage of vector machines and cloud infrastructure. However, despite all algorithmic advances towards achieving more accurate simulation models, explanation and prediction are more difficult for policy interventions than for the selling and pricing of books. This holds especially true for national and international policymaking, for example, climate change rather than potholes (data4policy.eu. Data for policy: Big data and other innovative data-driven approaches for evidence-informed policy making. Presented at Policy-making in the Big Data Era: Opportunities and Challenges, Cambridge. (accessed June 16)).
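The overfitting concern and its regularized remedies can be illustrated with a short, self-contained sketch. It uses scikit-learn (our choice of tooling, not one endorsed by the sources cited here) on synthetic data with K > N; OLS explains the training data almost perfectly but generalizes poorly, while ridge regression and the elastic net trade a little training fit for better out-of-sample performance.

    import numpy as np
    from sklearn.linear_model import ElasticNet, LinearRegression, Ridge

    rng = np.random.default_rng(0)
    N, K = 50, 200                        # K > N: more predictors than observations
    X = rng.normal(size=(N, K))
    beta = np.zeros(K)
    beta[:5] = 1.0                        # only 5 predictors actually matter
    y = X @ beta + rng.normal(scale=0.5, size=N)

    X_test = rng.normal(size=(1000, K))   # held-out data for out-of-sample checks
    y_test = X_test @ beta + rng.normal(scale=0.5, size=1000)

    models = [("OLS", LinearRegression()),
              ("ridge", Ridge(alpha=10.0)),
              ("elastic net", ElasticNet(alpha=0.1, l1_ratio=0.5))]
    for name, model in models:
        model.fit(X, y)
        print(name,
              "train R^2:", round(model.score(X, y), 2),
              "test R^2:", round(model.score(X_test, y_test), 2))

Both regularized models shrink coefficients toward zero, which is what keeps the K > N setting well behaved; the elastic net additionally sets many coefficients exactly to zero, effectively selecting predictors.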
Provision of means

Similarly to the previous two stages, decisions on how to most efficiently provide the required personnel and financial means for the implementation of new policies can be improved if previous experiences can be analyzed in detail. Budgetary processes provide large amounts of data that can enable the detection of patterns, which can then be used to design more efficient and effective ways to build a budget for a policy. A more transparent and more performance-oriented provision of means could once again be a source of legitimacy for political systems and governments. Additionally, Big Data will also potentially enable the testing of new ways of revenue-neutral financing for policies. The ability to geographically pinpoint problem areas, and the possible calculation of savings and new revenues resulting from the potential resolution of the problem, could make it easier to gather support for certain policies, while making the rejection of others more likely. There is already some empirical evidence that the use of Big Data in budgeting can increase efficiency and effectiveness while reducing costs (Manyika et al. 2011). The availability of more data facilitates a shift toward outcome-oriented budgeting and the creation of evaluation frameworks that could funnel resources to where they are most needed, and not only to the areas to which they used to be allocated in previous periods. Such legacy allocations reflect a “budgeting tradition” according to which some areas are granted higher budget resources because they spent more in the past and now occupy the pole position when it comes to the allocation of funds. Funding decisions could ideally be based increasingly on the estimated impact of area-specific spending, thus reducing the allocation of funds based on the political influence of powerful government agencies. Funding needs would be determined depending on the estimated impact on the basis of available data and evaluations of previous policies, creating a feedback loop that could help identify and discontinue unsuccessful policies and distribute more resources to successful ones (a stylized sketch of such impact-proportional reallocation follows below). Big Data can also play a productive role in the procurement process: Thanks to improved ways of checking the records of possible partners from the private sector, public agencies are enabled to identify the best possible cooperation partners, while data analytics can already enable improvements in tax compliance and the avoidance of, for example, welfare benefits fraud. All these measures will improve the financial situation of public agencies and will allow funds to be used increasingly for problem solution rather than for maintaining the administrative apparatus.
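A toy Python sketch can make the impact-based feedback loop tangible: a fixed total budget is redistributed in proportion to measured impact per unit of spending instead of being rolled over from the previous period. All program names and figures are invented.

    # Invented figures: program -> (last period's budget, measured outcome).
    programs = {
        "job training":   (10.0, 8.0),
        "road repair":    (20.0, 5.0),
        "literacy drive": (5.0, 6.0),
    }

    total = sum(budget for budget, _ in programs.values())
    # Impact per unit of spending is used as the reallocation weight.
    impact = {name: outcome / budget for name, (budget, outcome) in programs.items()}
    weight_sum = sum(impact.values())

    for name, (old_budget, _) in programs.items():
        new_budget = total * impact[name] / weight_sum
        print(f"{name}: {old_budget:.1f} -> {new_budget:.1f}")

In this stylized scheme, a program that delivers little measured outcome per unit of budget automatically loses funds to programs that deliver more, which is the feedback loop described above in its simplest form.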
Implementation

The implementation of policies could be influenced by Big Data in two ways: First, the ability to pinpoint problem zones could be a way to implement different levels of policy intensity. For example, an increase in policing can be focused more precisely on problem areas, thereby reducing the occurrence of crime at the point of its origin. Second, the very execution of new policies will almost immediately produce new data, which can then be used to evaluate the effectiveness of these policies and enhance future implementation processes by identifying problems with previous ones. As will be demonstrated in connection with the evaluation stage, it is especially this new dimension of evaluation that will probably have the most significant effect on the different stages of the policy cycle. The production of data about the implementation of policies not after but during the implementation can create unprecedented flexibility when it comes to the transformation of policy ideas into actually executable policies. For example, a new redistributive tax code could be tested almost in real time as to whether it has the desired effect or whether modification is necessary. As mentioned earlier in this article, this would also mean increased autonomy for public administrations, to enable them to react as quickly as possible to incoming evaluation results. Additionally, the accuracy of some of the fundamental sources of information for the implementation of policies can be increased by Big Data. Census data, for example, often runs the risk of being out of date by the time it is used for formulating and implementing new policies. Through the combination of several databases, however, census data could be produced on an almost daily, rolling basis instead of being updated only once or twice a decade. Demographic data, unemployment numbers, or migration patterns could be observed in real time, enabling a much faster assessment of whether the implementation of a certain policy was a success. The inclusion of external data would be a promising further step, either to enhance existing authoritative government data or to improve the data cross-check process in view of increased validity.

Continuous evaluation in the e-policy cycle

The attentive reader will have noticed that the section on applications of BDA in the policy cycle omits the evaluation stage, and for good reason.
The various analytic capabilities of Big Data all apply to evaluation. But instead of enumerating and discussing BDA in policy evaluation, we suggest a more radical and novel approach: a redesigned policy cycle that takes account of advances in ICT and, specifically, the analytical capabilities provided by Big Data. Textbooks differentiate between formative and summative evaluation, with the main differences being qualitative vs. quantitative methods and the policy cycle stage at which evaluation is performed. Hudson and Lowe (Hudson, J., and S. Lowe. Understanding the policy process: Analysing welfare policy and practice. Bristol, UK: Policy Press) claim that in the rational view evaluation is retrospective, summative judgments dominate, and experimental research is regarded as the gold standard. As a response to this rational model, a bottom-up approach to evaluation has emerged, which is formative, based on qualitative evidence, and includes the active participation of stakeholders, with feedback appearing as the policy is being rolled out instead of after policy implementation (Parsons, W. 1995. Public policy: An introduction to the theory and practice of policy analysis. Aldershot, UK: Edward Elgar).
The traditional policy cycle is characterized by evaluation happening at the very end of policymaking, yet with early exits to preceding process steps the moment failure becomes apparent. However, adjusting the set agenda in this scenario is risky: in the pre-Big Data era, the speed at which evaluations were delivered by traditional business intelligence (BI) systems was generally not high enough to justify early breakouts from the policy cycle. A distinctive feature of the Big Data toolbox is the possibility of real-time processing. One advantage of instantaneous or near-instantaneous data processing is that evaluation results become available the very moment data arrives. This enables a new view of the policy cycle, namely that of continuous evaluation. BDA enables evaluation, instead of being a well-defined process step at the very end of the policy cycle, to happen at any stage, and to happen in the background, invisibly to the affected stakeholders. Thus we propose a newly shaped policy cycle in which evaluation does not happen at the end of the process but continuously, opening permanent possibilities of reiteration, reassessment, and consideration. This removes evaluation from its place at the end of the policymaking process and instead makes it an integral part of every other policymaking step. This is a new feature enabled by Big Data Analytics, and we name it the e-policy cycle.
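Operationally, continuous evaluation can be pictured as a check that runs every time a new outcome record arrives, rather than once at the end of the cycle. The following Python sketch is our own illustration; the indicator, the target, and the data are invented.

    def continuous_evaluation(stream, target, window=3):
        # Recompute a rolling policy indicator on every new record and signal
        # the need for reassessment as soon as it drifts below target.
        recent = []
        for value in stream:
            recent.append(value)
            recent = recent[-window:]              # keep a rolling window
            indicator = sum(recent) / len(recent)
            if indicator < target:
                yield f"indicator {indicator:.2f} below target {target} -> reassess"

    # Invented stream of daily outcome measurements for a running policy.
    for alert in continuous_evaluation([0.9, 0.8, 0.6, 0.5, 0.4], target=0.7):
        print(alert)

In the e-policy cycle, such a check would be attached to every stage, so that any stage can trigger reiteration the moment the evidence warrants it.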
Our approach of continuous evaluation harmonizes the views of summative and formative evaluation: Continuous evaluation in the e-policy cycle is formative as it is performed throughout the policy process, and summative as it is based on rational models. With advances in Big Data, use cases of continuous evaluation in public administration and government become available. The US Army, for example, is testing a program called the Automated Continuous Evaluation System. Utilizing BDA solutions and context-aware security, the system analyzes government, commercial, and social media data to uncover patterns relating to US Army applicants. In 21.7% of cases the program revealed important information the applicant had not disclosed, such as serious financial problems, domestic abuse, or drug abuse. This use case presents only a very limited view of the possibilities generated by continuous evaluation (Executive Office of the President. 2014. Big data: Seizing opportunities, preserving values. Washington, DC: The White House, 36). Another example of continuous evaluation is the UK Government Program on Performance Data. Instead of merely providing open government data from policy programs at various stages of the policy cycle, visualizations provide a more accessible experience with respect to the evaluation of government policymaking. The program is still in beta, changes are likely to happen, and many of the visualizations lack the capability to drill down further into the underlying data. However, the UK performance program serves as another example of applied continuous evaluation.