
RSS - Isaca.org

ISACA Now: Posts


RSS feed for the Posts list.

Artificial Intelligence: A Damocles Sword?


Ravikumar Ramachandran

“Artificial intelligence (AI) is proving to be a double-edged sword. While this can be said of most new technologies, both sides of the AI blade are far sharper, and neither is well understood.” - McKinsey Quarterly, April 2019

In Greek mythology, the courtier Damocles was forced to sit beneath a sword suspended by a single hair to emphasize the instability of kings’ fortunes; hence the expression “the sword of Damocles” for an ever-present danger.

To extend the metaphor, users of artificial intelligence are like kings, enjoying the amazing and incredible functionalities this cutting-edge technology brings, yet a sword hangs over their heads because those same capabilities scale just as readily toward harm.

Artificial Intelligence: Meaning and Significance
To quote a formal definition, AI is “the art of creating machines that perform functions that require intelligence when performed by people” (Kurzweil, 1990).

However, intelligence is a more elusive concept. Though we know that humans require intelligence to solve their day-to-day problems, it is not clear that the techniques computers use to solve those same problems endow them with human-like intelligence. In fact, computers use approaches very different from those used by humans. To illustrate, chess-playing computers use their immense speed to evaluate millions of positions per second – a strategy no human champion could employ. Computers have likewise used specialized techniques to predict a consumer’s choice of products after sifting through huge volumes of data, and to identify biometric, speech and facial recognition patterns.
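The brute-force evaluation described above can be illustrated with a minimal sketch of minimax search, the exhaustive game-tree strategy that chess engines scale to millions of positions per second. The toy tree and score values below are illustrative, not from any real engine.

```python
# Minimal sketch of brute-force game-tree search (minimax).
# A "position" is either a leaf score, or a list of child positions.

def minimax(position, maximizing):
    """Exhaustively search the tree and return the best achievable score."""
    if isinstance(position, (int, float)):  # leaf: a static evaluation
        return position
    scores = [minimax(child, not maximizing) for child in position]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game: the maximizing player picks a branch,
# then the minimizing opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3
```

A real engine evaluates such trees to enormous depth, which is precisely the speed-based approach no human can imitate.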

Having said that, humans use their emotions to arrive at better decisions, which a computer (at least at present) is incapable of doing. Still, by developing sophisticated techniques, AI researchers are able to solve many important problems, and the solutions are used in many applications. In health and medical disciplines, AI is able to contribute and provide advanced solutions, by yielding groundbreaking insights.  AI techniques have already become ubiquitous and new applications are found every day. Per the April 2019 McKinsey Quarterly Report, AI could deliver additional global economic output of $13 trillion per year by 2030.

AI Risk and Potential Remediating Measures
Along with all the aforementioned positive outcomes, AI brings innumerable risks of different types, ranging from minor embarrassments to catastrophic events that could endanger humankind. Let us enumerate and detail some of the known risks brought on by AI:

1. Lack of Complete Knowledge of the Intricacies of AI
AI is a recent phenomenon in the business world, and many leaders are not knowledgeable about its risk factors even though market and competitive pressures force them to embrace it. The consequences could range from a minor mistake in decision-making to a loss of customer data leading to privacy violations. Remediating measures include making everyone in the enterprise accountable, ensuring board-level visibility, and performing a thorough risk assessment before embarking on AI initiatives.

2. Data Protection
The huge amounts of data involved, predominantly unstructured and drawn from sources such as the web, social media, mobile devices, sensors and the Internet of Things, are not easy to protect from loss or leakage, which can lead to regulatory violations. A strong end-to-end process needs to be built, with robust access control mechanisms and a clear definition of need-to-know privileges.
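A need-to-know access control can be sketched as a simple role-to-grant mapping that denies anything not explicitly granted. The role names and data categories below are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch of a need-to-know access check: each role is granted
# only the data categories it requires; everything else is denied.

ROLE_PRIVILEGES = {
    "data_scientist": {"sensor_feeds", "web_logs"},
    "support_agent":  {"customer_profiles"},
}

def can_access(role, data_category):
    """Allow access only when the category is in the role's grant set."""
    return data_category in ROLE_PRIVILEGES.get(role, set())

print(can_access("support_agent", "customer_profiles"))  # True
print(can_access("support_agent", "sensor_feeds"))       # False
```

Defaulting to an empty grant set for unknown roles is the key design choice: access must be explicitly given, never implied.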

3. Technological Interfaces
AI mainly works through interfaces, with many windows for data feeds coming from various sources. Care should be taken to ensure that the data flows, business logic and associated algorithms are all accurate, to avoid costly mishaps and embarrassment.

4. Security
This is a big issue, as evidenced by ISACA’s Digital Transformation Barometer, which shows that 60 percent of industry practitioners lack confidence in their organization’s ability to accurately assess the security of systems based on AI and machine learning. AI works at a huge scale of operations, so every precaution should be taken to ensure the perimeter is secured. All aspects of logical, physical and application security need to be examined with more rigor than would otherwise be warranted.

5. Human Errors and Malicious Actions
Protect AI from humans, and humans from AI. Insider threats, such as disgruntled employees injecting malware or deliberately faulty code, could produce disastrous outcomes or even catastrophic events such as the destruction of critical infrastructure. Proper monitoring of activities, segregation of duties, and effective communication and counseling from top management are good countermeasures.

The deployment of AI may lead to discrimination and displacement within the workforce, and could even cost the lives of those who work alongside AI machines. This can be remediated by upskilling workers and placing humans at vantage points in the supply chain where they play an important role in sustaining customer relationships. To prevent AI-related workplace perils, rigorous checking of scripts and the installation of fail-safe mechanisms, such as the ability to override the systems, will help.

6. Proper Transfer of Knowledge and Atrophy Risk
The intelligence humans use to solve a problem is transferred to machines through programs, so that the machine can solve the same problem at a much larger scale with great speed and accuracy. Care should therefore be taken that no representative data or logic is left out or erroneously encoded, lest poor outcomes and decisions result in losses to the business.

Because skilled humans cede tasks to machines, those skills can erode over time, resulting in atrophy. This can be partly remediated by keeping an up-to-date manual of such critical skills, including disaster recovery procedures.

Disclaimer: The views expressed in this article are the author’s own and do not represent those of the organization or of the professional bodies with which he is associated.

Category: Risk Management
Published: 12/13/2019 2:58 PM


(12/12/2019 @ 20:07)

Who Am I? CRISC Equips Professionals and Organizations with a Valuable Identity


Darren Ellis

As a risk practitioner, have you ever tried to describe what you do for a living to a family member or a friend? If so, you’ve likely experienced their acquiescent and politely confused reaction as you articulate concepts like risk assessments, controls, tests, tolerance, appetite, key risk indicators, governance and a host of other tactics that are commonly executed as part of a practitioner’s day-to-day responsibilities. At the conclusion of your pride-filled intellectual description, you feel like you did a great job explaining what you do, when your conversational partner replies with, “Wow, that sounds awesome! So, what do you actually do?” Uncertain about how to respond, you begin to retrace your words only to realize that internally, you are asking yourself that very same question, combined now with an unclear perspective about your professional identity. You ponder, “What DO I do, and who am I as a professional?”

Over the past 20 years, I’ve observed a plight all too common among risk practitioners wherein there is an enthusiastic rigor to schedule tasks, complete action plans, provide reporting/updates and declare that risks have been mitigated, when the most certain of questions is to follow: “So, what risk did we eliminate/reduce and how does that add value to our organization?” The enduring effort to complete tasks and assignments by the risk practitioner propagates and reinforces an illusion of risk management, because work, in the form of tasks and actions, was completed.

Reality strikes! In the absence of an industry framework with principles, a common taxonomy and structured objectives to clearly articulate how issues, losses and events are being prevented or reduced, the risk practitioner’s reputation, brand, self-esteem and identity progressively deteriorate. I’ve equipped hundreds of professionals with the training and tools provided by the CRISC certification, and the outcome is nearly always the same: CRISC training/certification serves as a catalytic fuel energizing the risk practitioner’s identity while accelerating organizational maturity toward a value-driven, risk-intelligent culture. Here is how:

Individuals Identify Themselves as Competent and Confident Practitioners

  • A Strong Foundation: They learn the basics, they speak a common language and they use a proven methodological approach
  • A Community of the Like-Minded: They are part of a formally recognized community of professionals
  • A Distinction: They have made it through the studies and requirements necessary to obtain the CRISC distinction
  • Unlocking Strategic, Big-Picture Thinking: Their competencies become habits, freeing up their mind to think more broadly with intriguing inquisition
  • Clearly Articulating Value: Labeling/linking value and purpose effectively with executives, second/third line and examiners

Organizations Evolve to a Risk Intelligent, Value-Driven Ecosystem, Fueled by Trained Practitioners

  • Organic Neural Networking Within the Company: Team members formed their own think/brain tanks resulting in multiple innovations/enhancements within the first few months after CRISC training
  • Advancing and Benchmarking Industry Expertise: Team members developed external relationships within and across ISACA chapters to anticipate opportunities, prevent issues/events, and design better controls
  • Organic Employee Development Ripple Effect: Coaching took on a natural form, where CRISC candidates willingly encouraged, coached and mentored others

When you were asked about what you do for a living, it would have been so much easier to reply with something like: “I prevent bad things from happening to our customers/company. When I do my job well, my customers are safe and secure, and my company’s brand becomes stronger.”

With CRISC as an enabler, your employees will grow, develop and identify as professionals, and your organization will become enmeshed in a risk culture that is strong, resilient and organically intelligent.

Editor’s note: To find out more about the custom training program opportunities offered through ISACA, visit ISACA’s enterprise training page.

Category: Risk Management
Published: 12/10/2019 2:48 PM


(09/12/2019 @ 23:09)

When Everything Old is New Again: How to Audit Artificial Intelligence for Racial Bias


Ellen Hunt

You may not know it, but artificial intelligence (AI) has already touched you in some meaningful way. Whether approving a loan, moving your resume along in the hiring process, or suggesting items for your online shopping cart, AI touches all of us – and in some cases, with much more serious consequences than just putting another item in your cart.

As this technology becomes more widespread, we are discovering that maybe it’s more human than we would like. AI algorithms have been found to have racial bias when used to make decisions about the allocation of health care, criminal sentencing and policing. In its speed and efficiency, AI has amplified and put a spotlight on the human biases that have been woven into and become part of the Black Box. For a deeper dive into AI and racial bias, read the books, Automating Inequality, Weapons of Math Destruction, and Algorithms of Oppression: How Search Engines Reinforce Racism.

As auditors, what is the best approach toward AI? Where and how can we bring the most value to our organizations as they design and implement the use of AI? Auditors need to be part of the design process to help establish clear governance principles and clearly documented processes for the use of AI by their organizations and their business partners. Because AI is not static – it is forever learning – auditors need to take an agile approach to continuous auditing of the implementation and impact of AI to provide assurance and safeguards against racial bias.
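One concrete bias test an auditor can automate as part of such continuous auditing is a disparate-impact check along the lines of the “four-fifths rule”: compare the rate of favorable outcomes across groups and flag ratios below 0.8. The decision data and group labels below are illustrative, not from any real audit.

```python
# Sketch of an automated disparate-impact check (four-fifths rule).
# outcomes: dict mapping group -> list of 0/1 decisions (1 = favorable).

def selection_rates(outcomes):
    """Share of favorable decisions per group."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
ratio = disparate_impact_ratio(decisions)
print(round(ratio, 2))  # → 0.5
print(ratio < 0.8)      # → True: flags potential disparate impact
```

Because the model keeps learning, such a check belongs in a recurring job over fresh decision logs, not a one-time review.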

Design and Governance: “In Approaching the New, Don’t Throw the Past Away”
In the United States, we like to think that the impact of slavery ended with the Civil War. It didn’t. We also want to believe that the landmark US Supreme Court case of Brown v. Board of Education gave everyone access to the same education. It didn’t. Title VII of the Civil Rights Act of 1964 was passed to stop employment discrimination. It didn’t. Nonetheless, these “old” concepts of fairness and equality are still valid and need to be incorporated into the new AI: first at the design and governance level, then at the operational level. As the auditor, you should ask: what are the organization’s governance principles regarding the use of AI? A starting place may be to suggest that your organization adopt the OECD Principles on AI.

Do these principles apply only to the organization, or also to its third parties and other business partners? How do these principles align with the organization’s values and code of conduct? What risks are associated with uses of AI that are not aligned with these principles? Conducting impact assessments to help create bias impact statements can help build out these principles. (See Model Behavior: Mitigating Bias in Public Sector Machine Learning Applications for eight specific questions that auditors can ask during the design phase to reduce bias in AI.) Other resources to consider are After a Year of Tech Scandals, Our 10 Recommendations for AI; Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms; and Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability.

Implementation and Impact: “Put it on Backwards When Forward Fails”
The greatest challenge with auditing AI is the very nature of AI itself – we don’t fully understand how the Black Box works. Further, the decisions it made yesterday may not be the same today. When looking at implementation and impact, a few frameworks have emerged (See ISACA's Auditing Artificial Intelligence and the IIA’s Artificial Intelligence Auditing Framework: Practical Applications, Part A & Part B.) To see how others have approached this challenge, looking at the numerous research projects in the public sector can be helpful. Regardless of the methodology used, because AI is always learning, an agile approach that provides for continuous auditing will be required to provide assurance against racial bias.

Editor’s note: For a forward-looking view of AI in the next decade, see ISACA’s Next Decade of Tech: Envisioning the 2020s research.

Category: Audit-Assurance
Published: 12/6/2019 2:24 PM


(05/12/2019 @ 18:40)

Last updated: 15/12/2019 @ 18:19