
RSS - Isaca.org

ISACA Now: Posts

http://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/AllPosts.aspx


RSS feed for the Posts list.


Traits of a Successful Threat Hunter


Roger O’FarrilThreat hunting is all about being proactive and looking for signs of compromise that other systems may have missed. As defenders, we want to cut down the time it takes to detect attackers. To accomplish this, we assume the bad guys have penetrated our defenses, and then proceed to look for traces that their activities have left behind.

Putting aside the technical details, it is extremely important to consider the person, or perhaps the team, who is doing the hunting. I describe a good threat hunter as a person with a wide skill set who has “been there and done that” in multiple areas of IT and security. There are four main dimensions that help shape a good hunter:

Curiosity
A threat hunter needs to be patient, highly motivated, and driven by a desire to know more. The person needs to keep asking “why” in order to understand whatever activity is under analysis, and answering that “why” demands the drive to go deep down the rabbit hole.

Critical thinking
Being able to analyze and solve problems also is important. The hunter must always keep an open mind and be able to consider alternative solutions to the problem. Thinking like an attacker usually helps frame an investigation from a different angle and could be the key to uncovering evil within your systems.

Technical expertise
A wide array of technical knowledge is essential. A person who is an expert in networking but knows very little about other disciplines such as forensics, applications, databases, etc., may not be able to see the big picture. Ideally, the hunter has cross-discipline knowledge and knows who to reach out to when more in-depth analysis is required.

Ability to connect the dots
This is one of the most important aspects. Many analysts struggle when presented with multiple sets of information and therefore are unable to connect the dots and put together the puzzle. An efficient hunter should be able to understand the data and its business context, perform the appropriate correlations, and reach conclusions.
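As a rough illustration of that kind of correlation (my sketch, not the author's), the snippet below joins two hypothetical event feeds, VPN logins and door-badge swipes, by user and time to flag accounts that appear to be remote and on-site at once. The sample records and the 30-minute window are assumptions.

from datetime import datetime, timedelta

# Illustrative "connect the dots" sketch: correlate two hypothetical event
# feeds to flag users who look remote and on-site at the same time.
vpn_logins = [  # hypothetical sample data
    {"user": "alice", "time": datetime(2018, 8, 21, 9, 5), "src_ip": "203.0.113.9"},
    {"user": "bob",   "time": datetime(2018, 8, 21, 9, 30), "src_ip": "198.51.100.4"},
]
badge_swipes = [
    {"user": "alice", "time": datetime(2018, 8, 21, 9, 10), "door": "HQ-lobby"},
]

def correlate(logins, swipes, window=timedelta(minutes=30)):
    """Return (user, login, swipe) triples where both events fall inside the window."""
    findings = []
    for login in logins:
        for swipe in swipes:
            if login["user"] == swipe["user"] and abs(login["time"] - swipe["time"]) <= window:
                findings.append((login["user"], login, swipe))
    return findings

for user, login, swipe in correlate(vpn_logins, badge_swipes):
    print(f"Review {user}: VPN login from {login['src_ip']} near badge swipe at {swipe['door']}")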

Professionals with this sort of talent and skill are scarce. Remember that in many cases it makes perfect sense to develop hunting talent in-house. An employee who has worked across a few IT or information security disciplines and who knows your business brings great value to the table. Look around and see who is up to the challenge.

Editor’s note: Roger O’Farril will be presenting further insights on this topic at ISACA’s CSX North America conference, to take place 15-17 October in Las Vegas, Nevada, USA.

Category: Security
Published: 8/21/2018 2:49 PM


Cybersecurity is a Proactive Journey, Not a Destination


Mike Wons

Cybersecurity continues to grab the spotlight and mindshare as it pertains to computing and social trends.

The topic itself is broad and expansive, and the true impact of this segment of computing will be around for generations to come. For a solid perspective on where the industry currently stands, ISACA’s State of Cybersecurity 2018 research is a must-read. This report provides a great assessment of what needs to happen in the cybersecurity field to move from reactive to proactive.

Challenges around cybersecurity are not new and have actually been around since the dawn of computing. However, it is now a topic that everyone talks about. It is a board topic, it is a public safety and livelihood topic, and it is a personal topic. Hitting this trifecta of impact has finally created the sense of urgency and the attention that is needed. Now, the key is that as an industry, as a country, and as a world of over 7 billion people, we need to effectively address these industry challenges to preserve the computing environment for the future.

Today, most cybersecurity efforts are focused on what is referred to as the “EMR” model of educate, monitor, and remediate. This approach is effective but is essentially like the game of “whack-a-mole,” where the core underlying risks and issues are never solved and keep popping up.

So, how does the governing of cybersecurity become proactive?

While EMR is essential, the foundation of a more secure and trustworthy computing experience requires being more proactive. Proactive means ongoing, real-time, continuous self-testing and self-assessment, and a laser focus on education about best practices. This, combined with the continued evolution of the new SaaS (security-as-a-service), will help mitigate risk and build more trust in the future. Still, it will be very difficult to solve every cybersecurity challenge, given the technical debt that exists today and will persist for the immediate future.
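As one hedged example of what continuous self-testing can look like (my illustration, not the author's), the sketch below runs two basic checks that could be scheduled to repeat: certificate expiry on a host the organization owns, and ports that should not be reachable. The hostname, port list, and 30-day threshold are assumptions.

import socket
import ssl
from datetime import datetime, timezone

# Illustrative continuous self-assessment sketch: two basic checks that could
# run on a schedule. Hostname, ports and thresholds are assumptions.
def cert_days_remaining(host, port=443):
    """Return the number of days until the server certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            expires = datetime.strptime(
                tls.getpeercert()["notAfter"], "%b %d %H:%M:%S %Y %Z"
            ).replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def open_ports(host, ports):
    """Return which of the given ports accept a TCP connection."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=2):
                found.append(port)
        except OSError:
            pass
    return found

if __name__ == "__main__":
    host = "example.com"                                # hypothetical host we own
    if cert_days_remaining(host) < 30:
        print(f"WARN: certificate for {host} expires in under 30 days")
    unexpected = open_ports(host, [21, 23, 3389])       # ports we expect closed
    if unexpected:
        print(f"WARN: unexpected open ports on {host}: {unexpected}")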

Safe and secure computing can occur with a connected, comprehensive approach to security embedded in each of the leading digital disruption levers, from the Internet of Things, to conversational artificial intelligence, to blockchain and distributed ledger technology, to wearables and mobility. Industry focus, industry standards, close adherence to best practices, and the constant ability to randomize to protect digital identities are on the horizon and need to keep gaining momentum.

However, first and foremost, security best practices begin at the code level. As software engineers and as an innovation industry, we must make sure this is well executed at each and every opportunity we have.
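To ground that point with one small, commonly cited example (mine, not the author's), the sketch below contrasts building an SQL query by string concatenation with using a parameterized query; the in-memory database and schema are hypothetical.

import sqlite3

# Illustrative code-level security practice: parameterized queries instead of
# string concatenation. The in-memory database and schema are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Unsafe: concatenating untrusted input into SQL enables injection.
unsafe_query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print("unsafe:", conn.execute(unsafe_query).fetchall())   # returns data it should not

# Safer: a parameterized query treats the input strictly as data.
safe_query = "SELECT role FROM users WHERE name = ?"
print("safe:", conn.execute(safe_query, (user_input,)).fetchall())  # returns []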

Author’s note: Mike Wons is the former CTO for the state of Illinois and is now serving as Chief Client Officer for Kansas City, Missouri-based PayIt. Mike can be reached at mwons@payitgov.com

Category: Audit-Assurance
Published: 8/17/2018 4:02 PM


The AI Calculus – Where Do Ethics Factor In?


While artificial intelligence and machine learning deployments are on the rise – and generating plenty of buzz along the way – organizations face difficult decisions about how, where and when to introduce AI.

In a session Tuesday at the 2018 GRC Conference in Nashville, Tennessee, USA, co-presenters Kirsten Lloyd and Josh Elliot laid out many of the ethical considerations that should be part of those deliberations.

The pair detailed several instances of high-profile AI events over the past decade that highlighted the need to give ethical components of AI deployment a high level of focus early in a product or service’s design, as opposed to risking unforeseen fallout. The examples included the development of a controversial algorithm that predicted higher rates of recidivism for black defendants in the judicial system and a Stanford University study exploring how often AI could determine a person’s sexual orientation based on photos of their faces.

Yet, for all of the questionable or even potentially malicious use cases of AI, Lloyd and Elliot highlighted an extensive list of powerfully compelling uses for AI, such as advancing new medical treatments, preventing cyber attacks, improving energy efficiency and increasing crop yields. Elliot, Booz Allen Hamilton’s director of artificial intelligence, noted that AI also may prove transformative in missing person crises, such as being able to swiftly locate missing children in AMBER Alert child abductions.

Whether the potential ethical implications of AI and machine learning outweigh the good that can be accomplished is very much a case-by-case judgment call, Elliot said, requiring a holistic evaluation of the possible outcomes through a risk management lens. Successful, ethical implementation of AI and machine learning also calls for strong governance, with emphasis on benefits realization, risk optimization and resource optimization. Elliot and Lloyd said organizations should identify and engage key stakeholders in AI projects, including the creation of an ethical review board and a chief ethics officer. Some high-impact deployments might also require direct access to the C-Suite for input on risk considerations.

Elliot and Lloyd suggested that organizations consider the following questions when deciding how they might want to deploy AI and machine learning:

  1. What are our goals?
  2. How much risk are we willing to tolerate?
  3. What is the state of our data assets?
  4. What talent assets do we have?
  5. What are our values?

From a people talent standpoint, Elliot noted there is a serious shortage of professionals with the expertise to help enterprises effectively and securely implement AI and machine learning, causing many organizations to turn to the ranks of academia and research to fill in the personnel gaps. Lloyd, an AI strategist with Booz Allen Hamilton, acknowledged the workforce worries many harbor regarding the potential for AI and machine learning to displace large numbers of practitioners, but said that there will remain an enduring need for humans’ critical thinking skills, while machines continue to introduce process improvements in computational thinking.

Taking the long view, Elliot and Lloyd said AI and related disciplines have transitioned from their previous state of simple task execution to the current era of pattern recognition, with a future that will be reshaped by added capabilities of contextual reasoning. Elliot said many of today’s common uses, such as robotic process automation (RPA), are a mere “gateway drug” to more sophisticated technologies and applications that are being aggressively researched in Silicon Valley and beyond.

Category: ISACA
Published: 8/14/2018 2:39 PM


Last updated: 21/08/2018 @ 23:16