Sunday, June 28, 2020

System Development Life Cycle (SDLC) for an Organization's Dynamic Growth


What is the Systems Development Life Cycle?

The system development life cycle is a long-established concept in software engineering and the wider world of information technology. A system is an information technology component (hardware, software, or a combination of the two); in software development, individual components integrate with other software components to create a full-fledged system.
The system development life cycle (SDLC), also commonly known as the application development life cycle, is a multistep, iterative, and structured process that encompasses the activities of planning, analyzing, designing, building, testing, deploying, and maintaining an information system.
SDLC has been transformed to meet ever-changing needs in a complex atmosphere and under unique circumstances. When talking about an information system, we must recognize that it includes both hardware and software configurations, which is why the SDLC encompasses both components and usually covers these seven phases: planning, analysis, design, development, testing and integration, implementation, and maintenance. Additionally, it covers activities such as documentation and evaluation.

The essence of the system development life cycle is to deliver high-quality information systems that meet or exceed client expectations as they flow through pre-defined phases, within given timeframes and budgets.

The system development life cycle is oftentimes confused with the software development lifecycle, but while they share remarkable similarities, the development of information systems is relatively more complex and robust in its overall architecture.

Given the complexity of the process, numerous methodologies exist to help manage and control system development. Among these methodologies, we can find Waterfall, Agile, rapid prototyping, incremental, and more.

The system development life cycle helps alleviate the complexity of developing an information system from scratch, within a framework of structured phases that help shape the project and make it easier to manage.

It's important to have a system development life cycle in place because it helps transform a project idea into a functional and fully operational system. The SDLC, apart from covering the technical aspects of an information system's development, also encompasses activities such as process and procedure development, change management, user experience, policy development, impact assessment, and conformity to security regulations.

Another important reason for leveraging a system development life cycle is to plan ahead of time and analyze the structured phases and goals of a specific software system project. Goal-oriented processes don't follow a one-size-fits-all methodology; instead, they adapt to and are responsive to user needs, which is why it is important to have a well-defined plan to determine costs and staffing decisions, provide goals and deliverables, measure performance, and apply validation points at each phase of the life cycle to improve quality.

Main System Development Life Cycle Phases

As we covered before, the SDLC is used as a conceptual model that includes the procedures and policies necessary to develop or alter a system throughout its life cycle. The end result should be a high-quality system that meets or exceeds customer expectations and is delivered within time and budget constraints. Generally, the software development cycle comprises the following seven steps:

  • Planning
  • Feasibility analysis
  • Software Design
  • Programming
  • Implementation and Integration
  • Software Testing
  • Installation and Maintenance

    1. Planning

    This is the brainstorming phase, when specialists gather requirements and analyze all aspects of the future software product. The developers should understand the clients' requirements: what exactly they want and what issues may arise during development. This stage involves communication between stakeholders, the project team, and users.

    2. Feasibility analysis

    At this step, the project team defines the entire project in detail and checks the project's feasibility. The team divides the workflow into small tasks, so that developers, testers, designers, and project managers can evaluate their tasks. They determine whether the project is feasible in terms of cost, time, functionality, reliability, and so on.

    3. Software Design

    Software design is a major aspect of the software development cycle. The design should be creative and clear. It involves overall product design along with data structure and database design, and it can draw on many different design strategies.

    4. Programming

    This is a critical phase of the SDLC, in which developers write the code that delivers the desired software. Usually, a company assigns a team of programmers to a particular project. The work is divided through task allocation, so every coder has their own task.

    5. Implementation and Integration

    Normally, software consists of a great number of programs, which require careful implementation and step-by-step integration into the software product. During this stage, the project team checks whether the software product runs on various systems, and any bugs found are fixed.

    6. Software Testing

    After coding is complete, the software is sent to the testing department. The testers' work plays a crucial role in the quality and performance of the software. Quality analysts test the software using various test cases. Before the launch, the product needs verification, which includes testing and debugging by the testers. Once the testing department has ensured that the software is free of known defects, it moves to the next stage.

    7. Installation and Maintenance

    Finally, the software is handed over to the clients to be installed on their devices. After installation, if the client needs any modification, the product enters the maintenance process.

    The featured stages of the software development procedure are followed by the majority of IT companies in order to provide high-quality services in the development of all sorts of software. The SDLC can be shaped to fit the project requirements; Agile methodologies, including Scrum, offer greater flexibility and rely on cross-functional teams.


System Development Methodology

There are numerous SDLC methodologies available, and the real beauty in this sea of options lies in selecting the best system development methodology for a unique project. Each system development methodology carries its characteristic set of pros and cons that must be weighed to decide which one will yield the best results for an information system development project.

According to Techopedia, "various SDLC models have been created and can be implemented, including Waterfall, Rapid Prototyping, Incremental, Spiral, Fountain, Build and Fix, Synchronize and Stabilize, and Rapid Application Development (RAD)". Next, we list some of the most prominent SDLC methodologies available.

  • Waterfall: Known by many as the traditional methodology, Waterfall is a sequential and linear flow used to develop a software application. In Waterfall, the process is outlined as a series of finite stages, and each one must be fully completed before moving on to the next. The Waterfall approach follows this order: requirements, design, execution, testing, and release.
  • Rapid application development (RAD): An adaptive approach that puts less emphasis on up-front planning and more on an adaptive process. Oftentimes, prototypes are used in RAD to substitute for design specifications. RAD is considered one of the most popular SDLC models for software that is driven by user interface requirements. From its origin, RAD was created as a response to the plan-driven Waterfall methodology, which designs and builds things in almost as structured a way as a building. RAD is all about fast prototyping and iterative delivery, and falls under the broader category of Agile.
  • Prototyping: This methodology creates prototypes of the software application to simulate the functioning aspects of the desired final product. Prototyping is mainly used to visualize components of the software solution and ensure the final product meets customer requirements. There are several variants of prototyping, but they are mainly categorized as throwaway or evolutionary: throwaway prototyping creates a model that will eventually be discarded, while evolutionary prototyping builds a robust prototype that is constantly refined to reach its final version.
  • Spiral: The Spiral methodology can be thought of as a combination of the Waterfall and prototyping methodologies. It is typically the methodology of choice for large and complex projects because it uses the same stages as Waterfall, but separates them with planning, risk assessment, and prototype building.
  • Agile: An iterative and incremental framework recognized for excellence, Agile evolves through collaboration between teams. It is a dynamic and interactive methodology that works in sprints of a defined duration to produce lightweight deliverables that help reduce the time in which software is released. It advocates adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change.
  • Iterative and incremental: This methodology is designed to overcome the faults and shortcomings of the Waterfall methodology. It begins with initial planning and ends with the deployment of the solution, with cyclic iteration in between. In essence, it develops a software application through iterative and repeated cycles performed incrementally, so developers can learn from the development of earlier portions of the software.
  • V model: This methodology is considered an extension of the Waterfall methodology, but instead of flowing down in a linear way, the process bends upward after the implementation phase to form a V shape. In this model, each phase of the development life cycle is associated with a corresponding testing phase. The horizontal axis represents time or project completeness (left to right) and the vertical axis represents the level of abstraction (coarsest-grain abstraction at the top).

These methodologies can be combined to build a hybrid solution that better meets a specific project’s requirements. Usually, organizations rely on the expertise provided by System Analysts to decide and select the best methodology or combination of methodologies to use for a specific project. In the following section, we are going to explore the System Analyst role and how their valuable skill set has become a key component in the success of effective System Development Life Cycle projects.


Tuesday, June 9, 2020

How Blockchain Works


Blockchain, sometimes referred to as Distributed Ledger Technology (DLT), makes the history of any digital asset unalterable and transparent through the use of decentralization and cryptographic hashing.  


If you have been following banking, investing, or cryptocurrency over the last ten years, you may be familiar with “blockchain,” the record-keeping technology behind the Bitcoin network. And there’s a good chance that it only makes so much sense. In trying to learn more about blockchain, you've probably encountered a definition like this: “blockchain is a distributed, decentralized, public ledger."

“Blocks” on the blockchain are made up of digital pieces of information. Specifically, they have three parts:

  1. Blocks store information about transactions, like the date, time, and dollar amount of your most recent purchase from Amazon. (NOTE: this Amazon example is for illustrative purposes; Amazon retail does not work on a blockchain principle as of this writing.)
  2. Blocks store information about who is participating in transactions. A block for your splurge purchase from Amazon would record you as a participant along with Amazon.com, Inc. (AMZN). Instead of your actual name, though, your purchase is recorded without any other identifying information, using a unique "digital signature," sort of like a username.
  3. Blocks store information that distinguishes them from other blocks. Much like you and I have names to distinguish us from one another, each block stores a unique code called a "hash" that allows us to tell it apart from every other block. Hashes are cryptographic codes created by special algorithms. Let's say you made your splurge purchase on Amazon, but while it's in transit, you decide you just can't resist and need a second one. Even though the details of your new transaction would look nearly identical to your earlier purchase, we can still tell the blocks apart because of their unique codes, as the sketch below shows.
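
To make the hash idea concrete, here is a minimal Python sketch of the three kinds of information a block might store, with SHA-256 standing in for the "special algorithm" that produces the hash. The field names and values are illustrative assumptions, not any real blockchain's format.

    import hashlib
    import json

    def compute_hash(contents: dict) -> str:
        """Produce the block's unique code by hashing its contents with SHA-256."""
        serialized = json.dumps(contents, sort_keys=True).encode()
        return hashlib.sha256(serialized).hexdigest()

    # A toy block holding the three kinds of information described above.
    block_contents = {
        "timestamp": "2020-06-09T12:00:00Z",                   # part 1: transaction details
        "amount_usd": 49.99,
        "participants": ["a1b2c3-digital-signature", "AMZN"],  # part 2: who took part
    }
    block_hash = compute_hash(block_contents)                  # part 3: the unique hash
    print(block_hash)

    # Even a nearly identical second purchase yields a completely different hash:
    second = dict(block_contents, timestamp="2020-06-09T12:05:00Z")
    print(compute_hash(second) == block_hash)  # False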
A QUICK OVERVIEW
  1. Digital assets are distributed instead of copied or transferred.
  2. The asset is decentralized, allowing full real-time access.
  3. A transparent ledger of changes preserves integrity of the document, which creates trust in the asset.

How Does Blockchain Work?


The whole point of using a blockchain is to let people — in particular, people who don't trust one another — share valuable data in a secure, tamperproof way.

Blockchain consists of three important concepts: blocks, nodes and miners.


When a block stores new data, it is added to the blockchain. Blockchain, as its name suggests, consists of multiple blocks strung together. In order for a block to be added to the blockchain, however, four things must happen:

  1. A transaction must occur. Let's continue with the example of your impulsive Amazon purchase. After hastily clicking through multiple checkout prompts, you go against your better judgment and make a purchase. As we discussed above, in many cases a block will group together potentially thousands of transactions, so your Amazon purchase will be packaged into the block along with other users' transaction information as well.
  2. That transaction must be verified. After you make that purchase, your transaction must be verified. With other public records of information, like the Securities and Exchange Commission, Wikipedia, or your local library, there's someone in charge of vetting new data entries. With blockchain, however, that job is left up to a network of computers. When you make your purchase from Amazon, that network of computers rushes to check that your transaction happened in the way you said it did. That is, they confirm the details of the purchase, including the transaction's time, dollar amount, and participants. (More on how this happens in a second.)
  3. That transaction must be stored in a block. After your transaction has been verified as accurate, it gets the green light. The transaction's dollar amount, your digital signature, and Amazon's digital signature are all stored in a block. There, the transaction will likely join hundreds, or thousands, of others like it.
  4. That block must be given a hash. Not unlike an angel earning its wings, once all of a block's transactions have been verified, the block must be given a unique, identifying code called a hash. The block is also given the hash of the most recent block added to the blockchain. Once hashed, the block can be added to the blockchain, as the sketch below illustrates.
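
Here is a minimal sketch, in Python, of steps 3 and 4: packaging verified transactions into a block, hashing it, and linking it to the previous block's hash. The helper names (sha256, add_block) and transaction fields are illustrative assumptions; this is the chaining idea in miniature, not the Bitcoin protocol.

    import hashlib
    import json
    from typing import List

    def sha256(payload: dict) -> str:
        """Hash a block body with SHA-256 to produce its identifying code."""
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    chain: List[dict] = []

    def add_block(transactions: list) -> dict:
        """Steps 3 and 4: store verified transactions in a block, hash it,
        and chain it to the hash of the most recent block."""
        previous_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"transactions": transactions, "previous_hash": previous_hash}
        block = {**body, "hash": sha256(body)}
        chain.append(block)
        return block

    # Steps 1 and 2 (a transaction occurs and is verified) are assumed to have
    # happened already; here we only package the verified result.
    add_block([{"amount_usd": 49.99, "from": "a1b2c3-signature", "to": "AMZN"}])
    add_block([{"amount_usd": 19.99, "from": "d4e5f6-signature", "to": "AMZN"}])
    print(chain[1]["previous_hash"] == chain[0]["hash"])  # True: the blocks are linked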

Blockchain vs. Bitcoin

The goal of blockchain is to allow digital information to be recorded and distributed, but not edited. That concept can be difficult to wrap our heads around without seeing the technology in action, so let's take a look at how the earliest application of blockchain technology actually works.

Blockchain technology was first outlined in 1991 by Stuart Haber and W. Scott Stornetta, two researchers who wanted to implement a system where document timestamps could not be tampered with. But it wasn't until almost two decades later, with the launch of Bitcoin in January 2009, that blockchain had its first real-world application.

The Bitcoin protocol is built on the blockchain. In a research paper introducing the digital currency, Bitcoin's pseudonymous creator Satoshi Nakamoto referred to it as "a new electronic cash system that's fully peer-to-peer, with no trusted third party."

KEY TAKEAWAYS

    • Blockchain technology underlies cryptocurrency networks, and it may also be used in a wide variety of other applications.
    • Blockchain networks combine private key technology, distributed networks and shared ledgers.
    • Confirming and validating transactions is a crucial function of the blockchain for a cryptocurrency.










Saturday, June 6, 2020

Progression of Data, from data to information to knowledge to insight to action

Data, information, content and knowledge: from input to value

The Data-Information-Knowledge-Wisdom (DIKW) hierarchy, or pyramid, relates data, information, knowledge, and wisdom as four layers in a pyramid. Data is the foundation of the pyramid, information is the next layer, then knowledge, and, finally, wisdom is the apex. DIKW is a model or construct that has been used widely within information science and knowledge management. Some theoreticians in library and information science have used DIKW to offer an account of logico-conceptual constructions of interest to them, particularly concepts relating to knowledge and epistemology. In a separate realm, managers of information in business process settings have seen the DIKW model as having a role in the task of meeting real-world practical challenges involving information.


Data is conceived of as symbols or signs, representing stimuli or signals. Information is defined as data that are endowed with meaning and purpose. Knowledge is a fluid mix of framed experience, values, contextual information, expert insight and grounded intuition that provides an environment and framework for evaluating and incorporating new experiences and information. It originates and is applied in the minds of knowers. In organizations it often becomes embedded not only in documents and repositories but also in organizational routines, processes, practices and norms. Wisdom is the ability to increase effectiveness. Wisdom adds value, which requires the mental function that we call judgment. The ethical and aesthetic values that this implies are inherent to the actor and are unique and personal.
Knowledge Pyramid, Wisdom Hierarchy and Information Hierarchy are some of the names referring to the popular representation of the relationships between data, information, knowledge and wisdom in the Data, Information, Knowledge, Wisdom (DIKW) Pyramid.
Like other hierarchy models, the Knowledge Pyramid has rigidly set building blocks – data comes first, information is next, then knowledge follows and finally wisdom is on the top.
Each step up the pyramid answers questions about the initial data and adds value to it. The more questions we answer, the higher we move up the pyramid. In other words, the more we enrich our data with meaning and context, the more knowledge and insights we get out of it. At the top of the pyramid, we have turned the knowledge and insights into a learning experience that guides our actions.

The DIKW pyramid (Source: Soloviev, K., 2016).



Information is the next building block of the DIKW Pyramid. This is data that has been “cleaned” of errors and further processed in a way that makes it easier to measure, visualize and analyze for a specific purpose.
Depending on this purpose, data processing can involve different operations such as combining different sets of data (aggregation), ensuring that the collected data is relevant and accurate (validation), etc. For example, we can organize our data in a way that exposes relationships between various seemingly disparate and disconnected data points. More specifically, we can analyze the Dow Jones index performance by creating a graph of data points for a particular period of time, based on the data at each day’s closing.
By asking relevant questions about ‘who’, ‘what’, ‘when’, ‘where’, etc., we can derive valuable information from the data and make it more useful for us.
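
As a minimal sketch of this step up the pyramid, the snippet below turns a handful of made-up index closing values (the data) into summary answers (the information); the cleaning and aggregation mirror the operations described above.

    # Made-up daily closing values for an index (illustrative only).
    raw_closes = [27100.5, 27250.3, None, 26980.1, 27310.8, 27405.2]

    # Validation: drop entries that are missing or implausible.
    valid = [c for c in raw_closes if c is not None and c > 0]

    # Aggregation: summarize the series so a trend becomes visible.
    average = sum(valid) / len(valid)
    change = valid[-1] - valid[0]

    # The raw numbers were data; these answers to 'what?' and 'when?' are information.
    print(f"average close: {average:,.2f}")
    print(f"change over the period: {change:+,.2f}")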
But when we get to the question of 'how', we make the leap from information to knowledge.
“How” is the information, derived from the collected data, relevant to our goals? “How” are the pieces of this information connected to other pieces to add more meaning and value? And, maybe most importantly, “how” can we apply the information to achieve our goal?
When we don’t just view information as a description of collected facts, but also understand how to apply it to achieve our goals, we turn it into knowledge. This knowledge is often the edge that enterprises have over their competitors. As we uncover relationships that are not explicitly stated as information, we get deeper insights that take us higher up the DIKW pyramid.
But only when we use the knowledge and insights gained from the information to take proactive decisions can we say that we have reached the final step of the Knowledge Pyramid: wisdom.

Wisdom

Wisdom is the top of the DIKW hierarchy and to get there, we must answer questions such as ‘why do something’ and ‘what is best’. In other words, wisdom is knowledge applied in action.
We can also say that, if data and information are like a look back to the past, knowledge and wisdom are associated with what we do now and what we want to achieve in the future.


Data

  1. information, often in the form of facts or figures obtained from experiments or surveys, used as a basis for making calculations or drawing conclusions
  2. information, for example, numbers, text, images, and sounds, in a form that is suitable for storage in or processing by a computer

Information

  1. definite knowledge acquired or supplied about something or somebody
  2. the collected facts and data about a particular subject
  3. a telephone service that supplies telephone numbers to the public on request.
  4. the communication of facts and knowledge
  5. computer data that has been organized and presented in a systematic fashion to clarify the underlying meaning
  6. a formal accusation of a crime brought by a prosecutor, as opposed to an indictment brought by a grand jury

Knowledge

  1. general awareness or possession of information, facts, ideas, truths, or principles
  2. clear awareness or explicit information, for example, of a situation or fact
  3. all the information, facts, truths, and principles learned throughout time
  4. familiarity or understanding gained through experience or study

Wisdom

  1. the knowledge and experience needed to make sensible decisions and judgments, or the good sense shown by the decisions and judgments made
  2. accumulated knowledge of life or in a particular sphere of activity that has been gained through experience
  3. an opinion that almost everyone seems to share or express
  4. ancient teachings or sayings


Driver-Based Forecasting: Selecting the Right Drivers

How might driver-based forecasting—an approach that bases financial forecasts on operational drivers—support your company's performance management needs? In the following Q&A—based on questions asked by participants during a live webcast on the topic—we discuss driver-based forecasting, the business models that lend themselves to this approach, and alternative planning and forecasting approaches.
We have a very low-tech environment and do most of our work in spreadsheets. Is implementing driver-based forecasting even possible for us?

Absolutely. Implementing driver-based forecasting is something you can set up in a spreadsheet environment for the purposes of scenario analysis with a very small, limited-use footprint. But to get even more value from driver-based forecasting you need an integrated platform where you can see the consensus forecast across the company, measure performance against drivers, and run a distributive process. It's possible to set something up through a spreadsheet, though an integrated tool can better execute this.
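
As a minimal sketch of the spreadsheet-scale scenario analysis mentioned above, the snippet below varies a single driver (volume) across three scenarios; every name and figure is an illustrative assumption, not a recommended model.

    # One driver (volume), three scenarios: the spreadsheet-scale version
    # of driver-based forecasting. All figures are illustrative.
    scenarios = {"downside": 70_000, "base": 80_000, "upside": 95_000}
    price, cost_per_unit, fixed_costs = 12.50, 3.20, 250_000.0

    for name, volume in scenarios.items():
        profit = volume * (price - cost_per_unit) - fixed_costs
        print(f"{name:>8}: volume={volume:,} -> profit=${profit:,.0f}")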
What are the biggest hurdles in implementing driver-based forecasting?

Getting organizational alignment around the entire framework tends to be the biggest hurdle. Everyone needs to understand their role in the driver-based framework—what pieces they own, what pieces they are accountable for. And you need to get alignment throughout the organization on which drivers and rates you'll be using. If you try doing this in isolation, people won't buy into it and you won't get all of the value out of it that you otherwise might.
How many driver levels do you typically see for forecasting revenue? For example: Revenue = Volume x Price, Volume = Category Growth x Share, Share = Base Volume Share + Incremental Volume Share, and so on.

We would suggest that it's even more complicated than just the levels. One of the challenges in the world of driver forecasting for revenue and volume is also your time window, because you can use different methods for different time windows. The critical question is whether you have good source data on your drivers. So if you have a good way to distinguish base and incremental volume, maybe through promoted versus non-promoted sales or some other method by which you can isolate those drivers, then it is reasonable to bring them into the calculation. If you have the ability to look at distribution outlets, or to break down a channel, it can be reasonable to bring that in. One of the places where we see complexity emerge is where you expand the model too far into the theoretical and don't have any actual driver data to bring into play. You may be able to add variables and levels of detail to your input, but they don't necessarily give you an analytical benefit on the other side.
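
As a minimal sketch of the driver levels named in the question, here is that revenue tree in code; all values are illustrative assumptions.

    # Hypothetical driver tree for revenue, mirroring the levels in the question.
    category_growth = 1_000_000   # units the whole category is expected to sell
    base_share = 0.18             # share from non-promoted ("base") volume
    incremental_share = 0.04      # share from promotions ("incremental")
    price = 12.50                 # average selling price per unit

    share = base_share + incremental_share   # Share = Base + Incremental
    volume = category_growth * share         # Volume = Category Growth x Share
    revenue = volume * price                 # Revenue = Volume x Price

    print(f"volume: {volume:,.0f} units, revenue: ${revenue:,.2f}")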
A big flaw I see in your model is the assumption that everything is variable in the short term. How do you account for fixed-cost elements that do not flex directly with volume?

This seems to be consistent with what we're saying. Some things lend themselves to driver-based forecasting: things that vary in cost and flex with volume, typically volumes that are out of your control. For things that are more fixed, driver-based forecasting is probably not the best choice; traditional or choice-based planning would be better, depending on whether you're dealing with a discretionary or non-discretionary expense. You want to avoid artificially tying some driver to a fixed expense just because you are trying to manage the overall cost envelope.
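
A minimal sketch of that split, with assumed figures: only the variable piece is tied to the volume driver, while the fixed piece is planned separately.

    fixed_costs = 250_000.0    # rent, salaried staff: does not flex with volume
    cost_per_unit = 3.20       # driver rate: cost that flexes with volume
    forecast_volume = 80_000   # the operational driver

    variable_costs = cost_per_unit * forecast_volume
    total_forecast = fixed_costs + variable_costs   # only the variable piece is driver-based

    print(f"variable: ${variable_costs:,.0f}, fixed: ${fixed_costs:,.0f}, "
          f"total: ${total_forecast:,.0f}")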
If I understand driver-based forecasting correctly, not only would you need a quality source for volume data but a very timely source so that leadership can manage appropriately. Do you agree?

Yes, the timeliness of the information is critical. Typically we see the variance analysis being done after the month-end close, and your volume data will be linked into that process. When you bring in your actual data for financial values, you want an agreed-upon source for the volumes, so you can do the back-calculation and your three-way variance at that time. You can't be waiting three to six months down the line to get the data, because at that point you won't be able to change course if that's required.
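
As a minimal sketch of the back-calculation, here is a two-way volume/rate variance split (the "three-way" version adds a mix term); all figures are illustrative assumptions.

    budget_volume, budget_rate = 10_000, 5.00   # planned units and rate per unit
    actual_volume, actual_rate = 11_000, 5.40   # actuals brought in after month-end close

    volume_variance = (actual_volume - budget_volume) * budget_rate   # driven by volume
    rate_variance = (actual_rate - budget_rate) * actual_volume       # driven by rate

    total_variance = actual_volume * actual_rate - budget_volume * budget_rate
    assert abs((volume_variance + rate_variance) - total_variance) < 1e-9

    print(f"volume variance: ${volume_variance:,.0f}, rate variance: ${rate_variance:,.0f}")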
Is there a preferred or optimal number of drivers to identify and manage on an ongoing basis?

It depends. There is no set or preferred number of drivers. In fact, we are suggesting that there is no clear effective practice that applies to everybody. It comes down to, first, what is your business model and, from that, the appropriate type of spending. And second, what are you trying to measure and what are the analytical decisions you want to make out of that. If you look at the three case studies we talked about, the Internet company didn’t have many drivers at all. The wireless telecom employed about 15. And the manufacturer had many more. So it depends on the business model and what analytical question you’re trying to answer.
How do you segregate the implications of one discretionary project from other projects or external forces when measuring results to hold a specific project accountable for their estimate?

To state your question another way: because so much can impact a driver or a rate, how can you effectively tie an investment or an initiative to one of those drivers or rates? This can be difficult. However, you can effectively apply it when you can tie your discretionary spending to a rate that is controlled internally, so you can filter out external, macro-economic factors. Suppose it is a rate that is controlled internally and you say, "You've been trending at $2,000 per unit against this particular driver; we're going to fund this investment, but we expect to see that come down to $1,800." As long as everything is in the control of the organization, and as long as six different investments aren't all hitting that same driver-rate calculation, you can tie a specific choice or investment to a particular driver over the long term. That said, it can be hard to do for a sales forecast because there are so many other factors. But thinking about it in a rigorous way and linking it into the planning model can be a valuable process and make your business cases both more tangible and easier to understand.
Is an activity-based costing system a requirement for a successful driver-based forecasting system?

No. You don't need to have activity-based costing in place to do driver-based forecasting. They can exist independent of each other. If you have activity-based costing, it can accelerate your adoption of driver-based forecasting because you already have an initial sense of those activities which drive cost and the rates associated with those. If you have both activity-based costing and driver-based forecasting, it is very important to keep those two in synch so that you have consistency between them. Activity-based costing is likely going to exist at a level of detail that is deeper than driver-based forecasting. So they have to point in the same direction while you maintain the balance between the levels of detail.
How do you select drivers and rates when the rate changes based on the frequency with which the drivers occur? That is, how do you account for economies of scale that occur as driver frequency increases?

This brings up a couple of issues. One is the planning process: you have to think about things like scale and changes to drivers within the timing of your planning process. Frequency is important. If a driver is adjusting to economies of scale every couple of months, you want a planning process that reflects that. If it changes more on an annual basis, same thing. Each company has to think about the rate of refresh, that is, how often it executes its forecasting or planning process and how it deals with targeting and budgeting or forecasting and predicting. These questions are all related, and it is hard to answer more definitively without knowing the frequency of your organization's process.
Breakeven analysis is very helpful as a predictor of profitability. How would you implement it? And what about the variable vs. fixed-cost challenge?

The driver-based model can be a very rigorous way to do a breakeven analysis for a new product introduction. If you have the model figured out, you can plug in any sort of new growth and even isolate the other parts of the business that you don't want to model. Take the example of a new introduction: you can input it into a driver-based model and, for all the variable pieces, have that flow through and understand the full cost. You then need to understand what the fixed costs attributable to the new product introduction would be, and this is where the more traditional or choice-based planning comes in. This puts you at the point where you've got the breakeven analysis. The driver-based model can help build out the variable piece of the breakeven analysis, but not the non-driver-based pieces.
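
As a minimal sketch of the arithmetic, classic breakeven divides the fixed costs (from traditional or choice-based planning) by the unit contribution margin (from the driver-based model); the figures are illustrative assumptions for a new product introduction.

    fixed_costs = 120_000.0          # the piece from traditional/choice-based planning
    price_per_unit = 25.00
    variable_cost_per_unit = 17.50   # the piece the driver-based model builds out

    contribution_margin = price_per_unit - variable_cost_per_unit
    breakeven_units = fixed_costs / contribution_margin

    print(f"breakeven at {breakeven_units:,.0f} units")  # 16,000 units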
How can product mix (e.g., returns) be factored into the volume/rate variance calculations?

You have to bring in product-level volume at a level of detail fine enough to support the analysis. For one company that might be product family; for another it might be stock-keeping-unit (SKU) level, depending on the importance of mix and margin (see the sketch after this exchange).
You do have control over volume of returns as you drill down to why the returns occurred in the first place.

The enterprise may have some control—for instance, product development has control over how simple they make the product—but it’s still beyond the control of the person forecasting returns expense.
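
As a minimal sketch of bringing in product-level detail, the snippet below computes volume and rate variances per product family, letting mix effects surface from the detail; the families, volumes, and rates are illustrative assumptions.

    budget = {"family_a": (6_000, 5.00), "family_b": (4_000, 8.00)}  # (volume, rate)
    actual = {"family_a": (5_000, 5.10), "family_b": (6_000, 7.90)}

    for product, (budget_volume, budget_rate) in budget.items():
        actual_volume, actual_rate = actual[product]
        volume_variance = (actual_volume - budget_volume) * budget_rate
        rate_variance = (actual_rate - budget_rate) * actual_volume
        print(f"{product}: volume ${volume_variance:+,.0f}, rate ${rate_variance:+,.0f}")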
Is there an average time required for each planning type by specific industries?

Some people are more effective and efficient than others. And some industries that are more capital intensive may take longer. But we don’t see averages by industry.
Aren't those post-mortems often too late with capital expenditures?

We believe the process of doing a post-mortem is more important than its timing.

Tadd Morganti, Managing Director | Deloitte Consulting LLP
tmorganti@deloitte.com


Monday, June 1, 2020

Internal audit failure leads to corporate governance failure





Toshiba - a case of internal audit failure:


Toshiba, a 140-year-old pillar of Japan Inc, was caught up in the country's biggest accounting scandal since the Olympus Corp scandal of 2011. In July 2015, Toshiba Corp president Hisao Tanaka and his two predecessors quit after investigators found that the company had inflated earnings by at least $1.2 billion between 2009 and 2014. Toshiba was one of the early adopters of the corporate governance reforms initiated in Japan, and its corporate governance structure met the prescribed standards. Time and again, cases of corporate governance failure have provided evidence that a good corporate governance structure does not necessarily lead to good corporate governance. Organisation culture is a critical determinant of the quality of corporate governance.
Some of the observations of the independent investigation committee of the company on internal audit demand discussion and debate.
The investigation committee observes, "According to the division of duties rules of Toshiba, the corporate audit division is in charge of auditing the corporate divisions, the companies, branch companies, and affiliated companies. However, in reality the corporate audit division mainly provided consultation services for the 'management' being carried out at each of the companies, etc (as part of the business operations audit), and it rarely conducted any services from the perspective of an accounting audit into whether or not an accounting treatment was appropriate."

The observations of the committee give the impression that the fault of the internal audit in Toshiba was that it focused on consultation service rather than assurance service. Should internal audit avoid providing consultation service? I do not think so. It was not the fault of the internal audit that it provided consultation service. The fault was that it did not pay attention to accounting audit.
In Toshiba, the top management used to set targets that were unachievable, and there was excessive pressure from the top management to achieve those targets.
Variable pay is a significant portion of the total pay. The compensation of executive officers comprises a base compensation based on title and a role compensation based on work content. Forty to forty-five per cent of the role compensation is based on the performance of the overall company or business department. The 'Challenge' to achieve unachievable targets, combined with performance-based pay, provided enough motivation to manage earnings. Therefore, accounting audit should have been a focus area for internal audit.
Internal audit can function independently only if the audit committee is capable, independent and effective, and the internal auditor reports to the audit committee.
In Toshiba, the audit committee was neither capable nor independent. The three external members of the audit committee had no knowledge of finance and accounting. An ex-Chief Financial Officer (CFO), who had been the CFO during the timeframe when the accounting irregularities occurred, was the only whole-time member of the audit committee. Therefore, the internal audit was not independent of the management. Earnings management had the tacit approval of the top management. Therefore, it is not surprising that accounting audit was excluded from the scope of internal audit. It is incorrect to infer that the accounting audit did not receive the attention of the internal audit because its focus was on providing consultation service.
Contemporary literature defines internal audit as an 'assurance and consulting service'. The issue is one of balancing consultation service and assurance service. A problem arises when the internal auditor forgets that internal audit is primarily an assurance function. The consultation service flows from the assurance service. Although the primary objective of an operations audit is to obtain assurance that the internal control installed to achieve operating objectives is adequate and operating effectively, the auditees look to the internal auditor for suggestions and consultancy. Such consultation service is a by-product of the assurance service. Auditees should not be denied the benefits of the internal auditor's understanding of the industry and the business, and of the challenges before the auditees in achieving operating objectives. Excluding consultation service from the scope of internal audit would result in sub-optimal utilisation of internal audit resources.
Organisation culture also determines the effectiveness of internal audit. The investigation committee observes, "A corporate culture existed at Toshiba whereby employees could not act contrary to the intent of their superiors". In such a culture an upright internal auditor cannot survive, particularly if he is not independent of the management. Perhaps that is the reason the internal audit in Toshiba chose the easy path of focusing on 'consultation service' only, without reporting internal control weaknesses.
The internal auditor is the 'eyes and ears' and the 'go-to man' of the audit committee. Therefore, internal audit failure leads to corporate governance failure.