
RSS - Isaca.org

ISACA Now: Posts

http://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/AllPosts.aspx


RSS feed for the Posts list.


Exploring COBIT 2019’s Value for Auditors


Dirk Steuperaert

COBIT 2019 is a terrific resource for a wide range of business technology professionals. In ISACA's 19 September 2019 Professional Guidance webinar (free registration), “COBIT 2019 – Highly Relevant for Auditors,” we will focus on assurance professionals and the benefits they can obtain from COBIT 2019.

For that purpose, we will first quickly revisit the key COBIT 2019 concepts. We will then discuss the features of COBIT 2019 that are most relevant for auditors, such as the design factors and design guide, the governance and management objectives, and the new process capability scheme.

The design factors and design guide are intended to help enterprises design their governance system: they prioritize the 40 governance and management objectives and help determine which focus area guidance should be used. When assurance professionals develop their audit plans, they usually take a risk-based approach that considers enterprise objectives. This is exactly how assurance professionals can and should use the design factors to prioritize their audit plans. The goals cascade, risk scenarios, current IT issues and other elements are included as design factors.
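
As a rough illustration only (not part of COBIT 2019 itself), that prioritization can be thought of as a weighted scoring exercise. In the sketch below, the design-factor scores and relevance weights are hypothetical placeholders chosen for the example; the objective identifiers are real COBIT 2019 objectives.

# Illustrative sketch: rank a few governance/management objectives by
# hypothetical design-factor scores, the way an auditor might rough out
# an audit-plan priority list. All weights and scores are made up.
design_factor_scores = {            # enterprise assessment on a 1-5 scale (illustrative)
    "risk_profile": 5,
    "enterprise_strategy": 3,
    "current_it_issues": 4,
}

objective_relevance = {             # hypothetical relevance of each factor, 0-1
    "EDM03 Ensured Risk Optimization": {"risk_profile": 1.0, "enterprise_strategy": 0.2, "current_it_issues": 0.6},
    "APO12 Managed Risk":              {"risk_profile": 0.9, "enterprise_strategy": 0.3, "current_it_issues": 0.7},
    "BAI06 Managed IT Changes":        {"risk_profile": 0.4, "enterprise_strategy": 0.5, "current_it_issues": 0.8},
}

def priority(relevance):
    # Weighted sum of design-factor scores; a higher score suggests auditing sooner.
    return sum(design_factor_scores[factor] * weight for factor, weight in relevance.items())

for objective, relevance in sorted(objective_relevance.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{objective}: priority {priority(relevance):.1f}")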

The governance and management objectives, with their process practices and activities, are – in language, concept and level of abstraction – essentially equivalent to control objectives and control practices, and can therefore be used to develop audit programs and serve as suitable criteria for audit assignments. The process activities can also be used to develop detailed assurance steps.

COBIT 2019 contains a new process capability assessment scheme as part of its performance management guidance. The new scheme is based on CMMI and assigns capability levels to each process activity. The relevance for assurance professionals is twofold. First, based on the audit plan in which governance and management objectives are prioritized, one can define target capability levels for the process component of each governance and management objective in scope of the assurance engagement, thereby defining which process practices and activities will be in scope of the audit program. Second, and closely related, assurance professionals can use the capability levels to report on process performance in their assurance engagements.
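
As a small illustration of that twofold use, the sketch below compares hypothetical target capability levels (set when scoping the engagement) with hypothetical assessed levels on the 0-5 CMMI-based scale and flags gaps for the assurance report. The processes and numbers are placeholders, not guidance from the framework.

# Illustrative sketch: compare hypothetical target capability levels with
# assessed levels per process and flag the gaps to report.
targets  = {"APO12 Managed Risk": 4, "BAI06 Managed IT Changes": 3, "DSS05 Managed Security Services": 4}
assessed = {"APO12 Managed Risk": 3, "BAI06 Managed IT Changes": 3, "DSS05 Managed Security Services": 2}

for process, target in targets.items():
    level = assessed.get(process, 0)
    verdict = "meets target" if level >= target else f"gap of {target - level} level(s)"
    print(f"{process}: target {target}, assessed {level} -> {verdict}")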

In addition to the above, assurance professionals should consider the non-process components of governance and management objectives when building their audit universes, plans and programs. COBIT 2019 indicates that processes are not the only important governance components: organizational structures, culture and behaviors, information streams, and skills and competencies matter as well. For that reason, we encourage assurance professionals to consider them when conducting their engagements. The current COBIT 2019 performance management guidance does not yet fully support these other types of components – initial guidance for organizational structures and information quality is included in COBIT 2019, while guidance for other components is yet to come.

I look forward to further demonstrating the relevance of COBIT 2019 for assurance professionals in this webinar, and to hearing your questions and suggestions for further guidance.

Category: COBIT-Governance of Enterprise IT
Published: 8/21/2019 3:00 PM


In the Age of Cloud, Physical Security Still Matters


Sourya Biswas

As a security consultant, I’ve had the opportunity to assess the security postures of clients of all shapes and sizes. These enterprises have ranged in size from a five-man startup, where all security (and information technology) was handled by a single individual, to Fortune 500 companies with standalone security departments staffed by several people handling application security, vendor security, physical security, etc. This post is based primarily on my experiences with smaller clients.

Cloud computing has definitely revolutionized the way companies do business. Not only does it allow companies to focus on core competencies by outsourcing a major part of the underlying IT infrastructure (and associated problems), it also allows for the conversion of heavy capital expenditure into scalable operational expenses that can be turned up or down on demand. The latter is especially helpful for smaller companies that can now access technologies that before had only been available to enterprises with million-dollar IT budgets.

Information security is one area where this transformation has been really impactful. With the likes of Amazon, Google and Microsoft continually updating their cloud environments and making them more secure, a lot of those security responsibilities can be handed over to the cloud providers. And this includes physical security as well, with enterprises no longer having to secure their expensive data centers.

However, this doesn’t mean that the need for physical security in the operating environment disappears. I once had a client CEO say to me, and I’m quoting him word for word – “Everything is in the cloud; why do I need physical security?” I responded, “Let’s consider a hypothetical scenario: you’re logged into your AWS admin account on your laptop and step away for a cup of coffee; I walk in and walk away with your laptop. Will that be a security issue for you?” Considering that this client had multiple entry points to its office with no receptionist, security guard or badged entry, I consider this scenario realistic instead of just hypothetical.

I’ve visited client locations, signed in on a tablet with my name and the name of the person I was supposed to meet, the person was notified, and I was subsequently escorted in. Note that at no point in this process was I required to verify who I was. Considering the IAAA (Identification, Authentication, Authorization, Auditing) model, I provided an Identity, but it was not Authenticated. In fact, if somebody else had signed in with my name, they would have gained access to the facility, since the client contact was expecting me, or rather someone with my name, to show up around that time.
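
To make that distinction concrete, here is a minimal, hypothetical visitor check-in sketch (not any specific product): recording a claimed name is identification, while matching that claim against a pre-registered visitor list and a photo ID is authentication.

# Hypothetical sketch: the tablet flow described above stops at "identification";
# authentication requires verifying the claimed identity against evidence.
expected_visitors = {("A. Visitor", "2019-08-19"): "host@example.com"}   # pre-registered visits (illustrative)

def check_in(claimed_name, visit_date, id_document_name=None):
    host = expected_visitors.get((claimed_name, visit_date))
    if host is None:
        return "rejected: no visit scheduled under that name"
    if id_document_name != claimed_name:
        return "identification only: claim not authenticated, escalate to reception"
    return f"authenticated: notify {host} and issue a visitor badge"

print(check_in("A. Visitor", "2019-08-19"))                    # name only, as in the story above
print(check_in("A. Visitor", "2019-08-19", "A. Visitor"))      # name backed by a matching photo ID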

Let’s look at one more example. One of my clients, dealing with sensitive chemicals, had its doors alarmed and CCTV-monitored. However, the windows were left unguarded, with the result that a drug addict broke in and stole several thousand dollars’ worth of material.

Smaller companies on smaller budgets obviously want to limit their spending on security. And with their production environments in the cloud, physical security of their office environments is the last thing on their minds. However, most of them have valuable physical assets, even if they don’t realize it, and those assets could be secured with minimal spending. Here are a few recommendations:

  • Ensure you have only a single point of entry during normal operations. Having an alarmed emergency exit is, however, highly recommended.
  • Ensure that the above point of entry is covered by a camera. If live monitoring of the feed is too expensive, ensure that the time-stamped footage is stored offsite and retained for at least three months so that it can be reviewed in case of an incident.
  • Install glass breakage alarms on windows. Put in motion sensors.
  • In addition to alarms for forced entry, an alarm should sound for a door held open for more than 30 seconds. Train employees to prevent tailgating.
  • Require employees and contractors to wear identification badges visibly.
  • Verify the identity of all guests and vendors before granting entry. Print out different-colored badges and encourage employees to speak up if anyone without a badge is on the premises.
  • Establish and enforce a clear screen, clean desk and clear whiteboard policy.
  • Put shredding bins adjacent to printers. Shred contents and any unattended papers at close of business.
  • Mandate the use of laptop locks.

Please note that the above recommendations are not expensive to implement. While some are process-based and require employee training, most require only minimal investment in off-the-shelf equipment. Of course, there are varying degrees of implementation – for example, contracting with a vendor to monitor and act on alarms will cost more than simply sounding the alarm.

In summary, while physical security requirements have definitely been reduced by moving to the cloud, it would be foolhardy to believe they have disappeared. This relative neglect of physical security by certain companies, and more, is the subject of my upcoming session at ISACA Geek Week in Atlanta.

What other physical security measures do you think companies often ignore but would be easy to implement? Respond in the comments below.

Category: Cloud Computing
Published: 8/19/2019 1:19 PM


The Film Industry and IT Security


Barbara Wabwire

For those in the ISACA community who are fans of popular culture, you might have noticed in recent years that, in many cases, film and TV stars are beginning to look more like you and me, and less like the muscle men of our youth.

Movie and TV producers have long been interested in technology. From the days of lone action heroes like the one-man army of John Rambo in “First Blood” and Arnold Schwarzenegger as a cyborg assassin in “Terminator,” the film industry has been at it. But as the work performed by IT security practitioners has become more central, not only to individual enterprises but to society as a whole, it has been interesting to see how that realization is filtering onto the big (and small) screen.

Now having more fully embraced technology-savvy heroes, the film industry portrays IT security in action-packed, fast-paced, intense scenes where IT systems are breached with a few clicks, in a matter of seconds. The nerdy programmer superheroes are largely depicted as introverted loners, and family members of IT security characters are prone to being kidnapped, taken hostage or subjected to other forms of trauma associated with the job.

In recent times, the internet, smartphones and mobile computing technology have taken center stage in movies, mirroring their rising prominence in our daily lives. The plot in many movies no longer leads to a traditional showdown in a physical location; instead, it is more likely to traverse multiple virtual locations, through drones and closed-circuit television.

In the hit TV series “24,” Joel Surnow and Robert Cochran create the character of the indomitable Jack Bauer, who relies heavily on intel from the IT security team. The team, normally just one or two very intelligent people, supports all counterterrorism operations from a multi-screen command operations center. The protagonists target each other’s operations centers as part of the main strategic battle plan. Backup plans and fallback positions become the lifeline of the story; you have to bring all of these down to win – this is the new fictional reality.

The Hatton Garden TV drama tells the real-life story of how the Hatton Garden (underground) Safe Deposit Company is burgled by four elderly, experienced thieves. As viewers, we worry whether the aging thieves will survive hunger, severe incontinence and, worse still, heart attacks. And we must wonder what really happens to the IT security personnel in such a plot over such long weekends, especially the Easter weekend.

The tension and level of precision required of IT security professionals will vary from one sector to another. IT security personnel in a bank may stress over financial loss schemes orchestrated by internal and external players, while in a law firm the concerns might center on a data leak that could compromise clients’ privacy and violate lawyer-client confidentiality, paving the way to lawsuits, reputational risk and unfathomable damage. It comes down to trust, built painstakingly over a long period of time, which can be destroyed in a very short one. And the business world is not so forgiving (see the Panama Papers exposé).

The good news is that the daily routine of a “normal” IT security practitioner is relatively mundane by comparison and would not sell at the box office. Incidentally, how many IT security professionals would pay a premium ticket price to watch “us” do our job normally? The excitement and glamour injected into the roles by the script writers may be necessary to keep us glued to our seats, but taking some creative license has long been a hallmark of film and TV producers. That should not obscure the bigger picture here – the work that IT security professionals do for our enterprises can have heroic impact, as today’s consumers of cinema and television can increasingly attest.

Category: Security
Published: 8/16/2019 2:59 PM


The Key Point Everyone is Missing About FaceApp


Rebecca Herold

Much has been written in recent weeks about the widely publicized privacy concerns with FaceApp, the app that uses artificial intelligence (AI) and augmented reality algorithms to take the images FaceApp users upload and let users change them in a wide variety of ways. Just a few of the very real risks and concerns, which exist in most other apps beyond FaceApp as well, include:

  1. The nation-state connection (in this case, Russia)
  2. Unabashed, unlimited third-party sharing of your personal data
  3. Terms of use give unrestricted license for FaceApp to use your photos
  4. Your data will exist forever … in possibly many different places
  5. Data from the apps are being used for surveillance
  6. Data from the apps are used for profiling
  7. Apps are being used in ways that bully and/or inflict mental anguish
  8. Using the images for authentication to your accounts
  9. Your image can easily be used in deep fake videos
10. Look-alike apps are spreading malware

I could go on, but this should give you a good idea of the range of risks involved. Here is a key point not within this list that has not been highlighted in the three or four dozen articles I’ve read on the topic: the FaceApp uproar illustrates a long-standing problem, one that is getting even worse, in the way privacy policies are written.

Evolution of Privacy Policies to Anti-Privacy Policies
I’ve been delivering privacy management classes since 2002. One of the topics I’ve emphasized is the importance of organizations actually doing what they say they will do in their website privacy policies, and not using misleading and vague language to limit privacy protections and increase sharing with third parties. (Privacy policies are also often referred to as privacy notices; for the purposes of this article, consider them to be one and the same.) Organizations should not use privacy policies as a way to remove privacy protections from individuals. The US Federal Trade Commission (FTC) published a substantive report detailing these problems in May 2000, entitled “Privacy Online: Fair Information Practices in the Electronic Marketplace: A Report to Congress.” The advice within this report is as valid today as it was back then; in many ways even more so.

A key point within that FTC report emphasized the need to provide clarity about the collection, use and disclosure of, and the choices related to, personal data. In particular, the FTC’s research into website privacy policies highlighted three significant problem areas:

1) the use of contradictory language;
2) offering unclear descriptions of how consumers can exercise choice; and
3) including statements indicating the possibility of changes to the policy at any time.

From 2000 to around 2010, I saw many websites that actually tried to address these issues. This was a fairly hot topic at information security and privacy conferences then, during which time I delivered keynotes and classes on addressing privacy within privacy policies and then implementing the supporting controls within the organization to comply with those policies.

What happened around 2011 and after? A perfect anti-privacy storm: increased use of search engine optimization (SEO) in ways that included communicating deceptive statements on websites and in their privacy policies, and a huge jump in the general global population’s use of social media sites and blogging. This led to thousands of headlines over the past decade demonstrating the increasing incorporation of privacy-unfriendly practices. It was soon followed by apps that integrated with virtually every type of device, server, social media site and cloud service. To succeed in these areas – to rank highest in searches, gather the most personal data to subsequently monetize, get the most likes, and get the most online amplification through partnering and sharing data with as many other organizations as possible – marketing practices incorporated creative (actually deceptive) modification of privacy policies. This, in large part, is why so many of the currently posted privacy policies tip toward being mostly anti-privacy in the manner in which they are written, often in ways that allow as much data as possible to be shared with as many third parties as possible.

FaceApp’s Privacy Policy Problems
There are many vague and problematic areas within the FaceApp posted privacy policy; take a moment to read it. See what I mean? Let’s consider the “Parties with whom we may share your information” section in particular.

  • FaceApp can share unlimited types and amounts of your information with “businesses that are legally part of the same group of companies that FaceApp is part of, or that become part of that group (“Affiliates”).” Which businesses does that include? The FaceApp privacy policy doesn’t say.
  • So, digging deeper, according to the FaceApp Terms page, FaceApp’s “Designated Agent” is “Wireless Lab Ltd.” with an address in Saint Petersburg, Russia. I did not find a privacy policy or terms of use on the Wireless Lab Ltd. page. It is interesting to see their email contact listed as info@faceapp.com. So, the businesses that are “legally part of the same group of companies that FaceApp is part of” remain a mystery, based on what the websites communicate.
  • Moving on to others outside of their “group of companies,” FaceApp indicates that they “also may share your information as well as information from tools like cookies, log files, and device identifiers and location data, with third-party organizations that help us provide the Service to you (“Service Providers”). Our Service Providers will be given access to your information as is reasonably necessary to provide the Service under reasonable confidentiality terms.” So, do you now know who FaceApp is sharing data with? No. Do you know the specific data that is being shared to unknown others? No.
  • Moving on … they also state: “We may remove parts of data that can identify you and share anonymized data with other parties. We may also combine your information with other information in a way that it is no longer associated with you and share that aggregated information.” Does this give you assurance? No. Why? Because, the way this is written, they may be sending your personal data and so-called “anonymized data” to other parties, and that information may then be combined with other information in ways that actually re-identify you (see the sketch just after this list).
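
A minimal sketch of that re-identification risk, using entirely made-up data: a release stripped of names can often be linked back to individuals by joining it with another dataset on quasi-identifiers such as ZIP code and birth date.

# Hypothetical linkage example: neither dataset alone contains a name together
# with a photo, but joining on quasi-identifiers re-identifies the "anonymized" record.
anonymized_photos = [                         # released without names
    {"zip": "10001", "birth_date": "1985-03-02", "photo_id": "img_8841"},
    {"zip": "94103", "birth_date": "1990-11-17", "photo_id": "img_2310"},
]
public_directory = [                          # e.g., a marketing or voter-style list
    {"name": "Jane Roe", "zip": "10001", "birth_date": "1985-03-02"},
]

for photo in anonymized_photos:
    for person in public_directory:
        if (photo["zip"], photo["birth_date"]) == (person["zip"], person["birth_date"]):
            print(f'{photo["photo_id"]} likely belongs to {person["name"]}')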

This section of the FaceApp privacy policy could be reworded to have basically the same meaning as: FaceApp may share any of your information with anyone else to use however they wish. Does this sound like a “privacy” policy to you? This type of non-privacy pledge is far too common on websites.

It is also worth noting that there was:

  • Just a single sentence (“We use commercially reasonable safeguards to help keep the information collected through the Service secure and take reasonable steps (such as requesting a unique password) to verify your identity before granting you access to your account.”) describing security, and a disclaimer of any responsibility for even securing your information and preventing others from getting access to your data.
  • No apparent information about how you can access and view all your data that they’ve collected or derived from what you provided to them.

Privacy Policy Problems Not Unique to FaceApp
If you reviewed your own organization’s privacy policy, would you identify similar problems? If you find that everything does look good from a privacy standpoint, is your organization fulfilling all the promises made in your posted privacy policy? In my experience doing privacy policy PIAs over the past couple of decades, roughly 90-95 percent of organizations are NOT in compliance with their own posted privacy policy. Every organization needs to realize that they are legally obligated to fulfill the promises they make within their own posted privacy policies, in addition to all their applicable laws and regulations.

It is a good practice for every IT audit, information security and privacy officer to put an audit of their posted privacy policy on their annual plan. If you don’t, you may be added to the growing list of organizations that have been slapped with increasingly large FTC fines for not fulfilling privacy policy promises.

Category: Privacy
Published: 8/14/2019 2:59 PM


Ethical Considerations of Artificial Intelligence


Lisa Villanueva

Have you ever stopped to consider the ethical ramifications of the technology we rely on daily in our businesses and personal lives? The ethics of emerging technology, such as artificial intelligence (AI), was one of many compelling audit and technology topics addressed this week at the 2019 GRC conference.

In tackling this topic in a session titled “Angels or Demons, The Ethical Considerations of Artificial Intelligence,” session presenter Stephen Watson, director of tech risk assurance at AuditOne UK, first used examples to define the different forms of AI. For example, in the early stages of AI it was thought that a computer could not beat a human at chess or Go. Many were fascinated to find that the computer could indeed be programmed to achieve this goal. This is an example of Narrow or Weak AI, where the computer can outperform humans at a specific task.

However, the major AI ethics problem and ensuing discussion largely focused on Artificial General Intelligence (AGI), the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. Some researchers refer to AGI as “strong AI” or “full AI,” and others reserve “strong AI” for machines capable of experiencing consciousness. The goal of AGI is to mimic the human ability to reason, which could, over time, result in the deployment of technology or robots that achieve a certain level of human consciousness. Questions were posed to the audience such as:

  • Should we make AI that looks and behaves like us and has rudimentary consciousness? Around half (49 percent) of the session attendees polled said no – not because they felt it was immoral or “playing God,” but because it would give a false sense that machines are living creatures.
  • Can morality be programmed into AI since it is not objective, timeless or universal and can vary between cultures?
  • Would you want AI-enabled technologies to make life-and-death decisions? Take the example of the self-driving car. Should the car be programmed to save the driver or the pedestrian in the unfortunate event of a collision?

In what scenarios would you want the AGI-enabled device to make the decision? Assurance professionals and others have been focused on gaining a better understanding of the mechanics of AI, and ISACA provides guidance on the role IT auditors can play in the governance and control of AI. However, it became apparent after this thought-provoking GRC session that questions such as the following should also be seriously considered and discussed, to ensure that ethics and morals in the development and use of AI are not forgotten in the effort to harness this technology:

  • What rules should govern the programmer, and to what extent should the programmer’s experience and moral compass play into how the AGI responds to situations and people?
  • What biases are inherent in the data gathered and upon which the AGI is learning and making decisions?
  • How do we evaluate the programs and associated algorithms – black-box AI, for example – once the machine has gained a human-like ability to comprehend?

The session intentionally stayed away from a deep discussion on the mechanics of the technology to foster the dialogue and thinking necessary to reflect on the ramifications, pro or con, of this growing technological capability, its future direction, and its impact on our business and social lives.

Over time, fewer and fewer technologies will be considered part of AI, because their capabilities will be so much a part of our daily lives that we won’t even think of them as AI. This was referred to as the “AI Effect.” Let’s not hesitate to ask the tough questions to ensure we are responsible and ethical in our development and use of this amazing technology as it continues to integrate into our daily routines to make our lives easier.

Share your thoughts on the ethics of AGI and other emerging tech in the comments below. We would love to hear from you and see you at the 2020 GRC conference, planned for 17-19 August 2020 in Austin, Texas, USA.

Category: Risk Management
Published: 8/15/2019 9:58 AM


Last updated: 22/08/2019 @ 20:11