Self-Service Data Analytics and Governance for Managers. Nathan E. Myers
Executives, too, have much to be excited about. If an innovation is developed in one part of their organization that has wide applicability and opportunities for replication across the shop, it is their responsibility to put in place an overarching clearinghouse apparatus to capture and scale these opportunities. Perhaps most critical of all, executives must feel convinced that risks are well documented and understood across the enterprise, and that strong policies and procedures are in place to guide the organization in active risk management.
In Chapter 5, we will discuss further the need to address risk through active risk governance, as operators themselves develop processing solutions using self-service data analytics tools.
Arguments for Self-Service Data Analytics Tooling
The data analytics toolkit is growing at a rapid pace, with many off-the-shelf tools that can be customized to perform routinized processing tasks. By shoehorning an unstructured process into a self-service data analytics tool, analysts and operators can structure work into a repeatable process that is stable, documented, and robust – even tactically mimicking a system-based process. Self-service analytics is a form of business intelligence (BI) in which line-of-business professionals are enabled to perform queries; to carry out extract, transform, and load (ETL) and data enrichment activities; and to structure their work in tools, with only nominal IT support. Self-service analytics is often characterized by simple-to-use BI tools with basic analysis capabilities and an underlying data model that has been simplified or scaled down for ease of understanding and straightforward data access.
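To make the ETL pattern concrete, the sketch below shows the kind of small, scripted extract-transform-load step an operator might assemble in a self-service environment. It is a minimal illustration only; the file names, column names, and business rule are hypothetical and are not drawn from any particular tool or process described in this book.

```python
# Minimal ETL sketch of the sort an operator might automate with a
# self-service tool. All file names and columns are hypothetical.
import pandas as pd

# Extract: pull a raw trade export from a shared location (assumed file).
raw = pd.read_csv("trades_export.csv")

# Transform: standardize dates, drop cancelled records, and enrich with
# desk-level reference data (assumed lookup file).
raw["trade_date"] = pd.to_datetime(raw["trade_date"])
active = raw[raw["status"] != "CANCELLED"]
desks = pd.read_csv("desk_reference.csv")
enriched = active.merge(desks, on="desk_id", how="left")

# Load: publish the structured output for downstream reporting.
enriched.to_csv("trades_enriched.csv", index=False)
```

Even a script this small captures what the text calls structuring work into a repeatable process: the steps are explicit, ordered, and rerunnable, rather than living in an analyst's head or in ad hoc spreadsheet manipulation.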
Earlier in this chapter, in the section Employee/Analyst/Operator Perspective, we described the plight of end-users who spend a disproportionate amount of their time performing data staging, data preparation, and routinized processing activities, rather than gleaning meaning and trends from their outputs through value-added analysis. We noted that they may have little influence over the prioritization queue for technology demand items, let alone an ability to influence the budgeted dollar amounts approved during the annual technology investment cycle, often leaving their efficiency needs unmet by core technology. We discussed the overlapping, but slightly different, perspective of managers, who need to increase control and reduce processing variance and failures by structuring work in tools. We also discussed their motivation to capture efficiency in order to meet additional demands being placed on resource-constrained departments. Here again, managers are often at the mercy of technology investment budgets and priorities, which are likely to leave their needs unmet in the short term. Finally, we discussed the landscape from the perspective of C-suite and divisional executives, who wish to minimize the number and impact of highly publicized catastrophic processing failures and the number of audit points levied by internal and external auditors, and who could be enticed to embrace any edge in strategic decision-making that paves the way for organizational success. The authors submit that a program of “small” automation through self-service data analytics can serve the needs of all of these stakeholders.
End-user analytics tools and business intelligence tooling can be readily deployed to automate small bits and pieces of processes in and around systems. Importantly, the involvement of core technology teams is not required to build them, as it would be for a far larger application rollout. When vendor software licensing costs are weighed against time savings, the average cost of employees, and the additional productivity that tool deployment unlocks, a significant return on investment (ROI) is evident. End-user tooling can be engaged by virtually anyone in an organization who is able to identify appropriate use cases and to navigate the increasingly accessible and user-friendly functionality.
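As a back-of-the-envelope illustration of that ROI claim, consider the arithmetic below. Every figure is an assumption invented for the example; license costs, hours saved, and loaded hourly rates will vary widely by organization and tool.

```python
# Illustrative ROI arithmetic only; all inputs are assumed values,
# not vendor quotes or benchmarks from the text.
license_cost = 5_000          # annual cost of one tool license (assumed)
hours_saved_per_week = 6      # routine processing time eliminated (assumed)
loaded_hourly_rate = 75       # fully loaded cost of an analyst hour (assumed)

annual_savings = hours_saved_per_week * 52 * loaded_hourly_rate
roi = (annual_savings - license_cost) / license_cost

print(f"Annual savings: ${annual_savings:,}")  # $23,400
print(f"ROI: {roi:.0%}")                       # 368%
```

Under even these modest assumptions, a single license recovering six hours a week pays for itself several times over in the first year, which is the essence of the argument made above.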
Operators and analysts can target the low value-added steps in their processing chain for analytics-assisted automation, allowing them to realize efficiency benefits in short order, even while strategic change requests work their way through the backlog and “wait” queues. Managers gain from the structured, stabilized, and regimented processing that results from centralizing processing steps in a tool. They can tighten process controls while improving cycle time and building capacity. Finally, executive-level strategic leadership can benefit directly from widespread adoption of self-service data analytics. From reduced client and regulatory impact of failed processing incidents to improved audit results, from capturing efficiency to sourcing descriptive and predictive information that improves decision-making – all of these arguments will be persuasive to division-level executives and functional heads. As self-service analytics champions, these leaders can do much to instill a proactive and empowered mindset across the organization. They can influence the reallocation of core technology investment budget dollars to the funding of a centrally sponsored data analytics program. Perhaps most importantly, they can promote and encourage innovative thinking throughout the enterprise.
Need for Self-Service Data Analytics Governance
Having set the stage, it is now appropriate to introduce one of the key topics of this book, which is the need for strong self-service data analytics governance. Many readers may already have begun to replace their spreadsheet-based end-user computing (EUC) tooling with tactical data analytics tools. We have already discussed the significant benefits available in putting flexible, user-configurable tools into the hands of users. Once the seal has been broken, expect widespread deployment at scale.
Skip ahead two years and suddenly you feel exposed. Which builds are being relied upon by regulators? Which builds are relied upon by customers? Did the individuals who put them in place have adequate knowledge of the underlying processes to build reliably and effectively? Were they well versed in the data analytics tools and technologies deployed? Were the builds adequately tested? Precisely how many builds exist across the organization? If key software vendors raise the price of basic licenses, is any of the work salvageable for migration to a new platform? You are being challenged by key internal clients on the quality of the financial deliverables that your team prepares, only to learn that your team has simply been taking analytics build outputs at face value. Team members no longer understand the longhand processing steps that have been automated, as the team has experienced significant turnover over the last two years. The tools have effectively become “black boxes,” in which the embedded transformation steps are obscured and difficult to decipher. You fear that your organization has fallen into a common trap: by moving away from regimented technology release cycles toward a decentralized change model, you have lost control.
Governance, or the lack thereof, is perhaps the strongest predictor of control and stability in an environment where self-service data analytics is prevalent. Effective governance is particularly critical given the expected growth pattern of data analytics adoption once the floodgates are opened. Without governance that keeps pace with the decentralization of development capabilities, organizations can find themselves struggling to demonstrate process effectiveness; they may not have clear visibility into the degree to which they depend on off-the-shelf software applications; they may lack adequate information upon which to base risk assessments; or they may get those assessments abjectly wrong. Governance must provide guidelines aimed at ensuring that processing inputs are of adequate quality and integrity; that the processing solutions implemented are appropriate, adequately tested, and operating effectively; that minimum standards of project documentation are met; and that risk assessment and mitigation activities can be demonstrated in the thoughtful deployment of analytics tooling.
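One concrete way to operationalize these guidelines is a central inventory in which every analytics build is registered. The sketch below shows what a minimal inventory record might look like; the schema and all field names are hypothetical illustrations mapping to the guidelines above (input quality, testing, documentation, and risk assessment), not a standard prescribed by any particular tool or framework.

```python
# Sketch of a per-build inventory record for analytics governance.
# All field names are hypothetical illustrations of the guidelines above.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AnalyticsBuildRecord:
    build_id: str             # unique identifier in the central inventory
    owner: str                # accountable individual, beyond the original builder
    tool: str                 # underlying platform, for vendor-dependency reporting
    process_description: str  # the longhand steps the build automates
    data_sources: list[str] = field(default_factory=list)  # input lineage
    regulatory_or_client_facing: bool = False               # drives the risk tier
    last_tested: Optional[date] = None                      # evidence of testing
    documentation_link: str = ""                            # documentation standard

# Example registration of a single build (all values invented).
record = AnalyticsBuildRecord(
    build_id="FIN-0042",
    owner="j.doe",
    tool="vendor-analytics-platform",
    process_description="Month-end reconciliation of ledger to sub-ledger balances",
    data_sources=["general_ledger_extract.csv", "subledger_extract.csv"],
    regulatory_or_client_facing=True,
    last_tested=date(2024, 3, 31),
    documentation_link="https://intranet.example/builds/FIN-0042",
)
```

A register of this kind directly answers the questions posed in the scenario above: how many builds exist, which are regulator- or client-facing, which vendor platforms the organization depends on, and whether testing and documentation can be evidenced.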
The shift from centralized processing within systems to a decentralized development model, in which end-users are equipped to independently source data and flexibly structure processing without the involvement of IT, necessitates a commensurate shift in controls. In the past, the controls safeguarding the enterprise from various IT general and application risks were centralized around the core technology stack. With the advent of self-service data analytics tools, development capabilities are placed directly into the hands of end-users. Controls embedded in systems are rendered irrelevant to the extent that processing is done outside of them. This evolution has dramatically shifted the risk environment.
Effectively, the robust governance that was built around systems has been side-stepped, now that systems are no longer the sole venue in which processing takes place.