Christie Terrill on building a high-caliber security program in 90 days

Duration: 27:20
 

The O’Reilly Security Podcast: Aligning security objectives with business objectives, and how to approach evaluation and development of a security program.

In this episode of the Security Podcast, I talk with Christie Terrill, partner at Bishop Fox. We discuss the importance of educating businesses on the complexities of “being secure,” how to approach building a strong security program, and aligning security goals with the larger processes and goals of the business.

Here are some highlights:

Educating businesses on the complexities of “being secure”

This is a challenge that any CISO or director of security faces, whether they're new to an organization or building out an existing team. Building a security program is not just about the technology and the technical threats. It's how you're going to execute—finding the right people, having the right skill sets on the team, integrating efficiently with the other teams and the organization, and of course the technical aspects. There are a lot of things that have to come together, and one of the challenges with security is that companies like to look at security as its own little bubble. They’ll say, ‘we'll invest in security, we'll find people who are experts in security.’ But once you're in that bubble, you realize there's such a broad range of experience and expertise needed for so many different roles that it's not just one size fits all. You can't use the word ‘security’ so simplistically. So, it can be challenging to educate businesses on everything that's involved when they just say a sentence like, ‘We want to be secure or more secure.’

Security can’t (and shouldn’t) interrupt the progress of other teams

The biggest constraint for implementing a better security program for most companies is finding a way to have security co-exist with other teams and processes within the organization. Security can’t interrupt the mission of the company or halt the projects other IT teams already have in flight. You can’t just stop everything because security teams are coming in with their own agendas. Realistically, you have to rely on other teams and be able to work with them so the security team can make progress either without them or alongside them.

Being able to work collaboratively and to support other teams while pursuing your security goals is absolutely critical. Typically, teams have their own projects and agendas, and if you can explain how security will actually help those projects in the end, they'll want to participate in your work as well. The work is integrated; you have to rely on each other.

How to approach security program strategy and planning

The assessment of a security program usually starts with the common triad of people, process, and technology. On the people side, there’s reevaluating the organizational structure—how many people should there be? What titles should they have? What should the reporting structure be? What should security take on itself, versus which responsibilities should we ask IT to take on or let them keep?

Then, for processes, there can be a lot of pain points. When we develop processes, including the foundational security practices, we start with the ones that would solve immediate problems, to show value and illustrate what a process can achieve. A process is not just a piece of paper or a checklist intended to make people's lives more difficult—a process should actually help people understand where something is in the flow and when it will get done. So, defining processes is really important to win over the business and the IT teams.

Then, finally, on the technology side, we try to emphasize that you should first evaluate the tools you already have. There may be nothing wrong with them. Look at how they're being used and whether they're being optimized. The costs—not just the upfront investment in security technology, but the cost to replace it, including consulting costs or the churn of having to rip and replace—can be very high and can derail some of your other progress. To start, make sure you’re using every tool to its fullest capacity and fullest advantage before going down the path of buying new products.



All episodes

 
The O’Reilly Security Podcast: The objectives of agile application security and the vital need for organizations to build a functional security culture.

In this episode of the Security Podcast, I talk with Rich Smith, director of labs at Duo Labs, the research arm of Duo Security. We discuss the goals of agile application security, how to reframe success for security teams, and the short- and long-term implications of your security culture.

Here are some highlights:

Less-disruptive security through agile integration

Better security is certainly one expected outcome of adopting agile application security processes, and I would say less-disruptive security would be an outcome as well. If I put my agile hat on, or could stand in the shoes of an agile developer, I would say they have a lot of areas where they feel security gets in the way and doesn't actually help them or make the product or the company more secure. Their perception is that security creates a lot of busy work, and I think this comes from that lack of understanding of agile from the security camp—and likewise of security from the agile camp. Along those lines, I would also say one of the key outcomes should be less security interference (where it's not necessary) in the agile process. The goal is to create more harmonious working relationships between these two groups. It would be a shame if the agile process were slowed down purely at the expense of security and we weren't getting any tangible security benefits from that.

Changing how security teams measure their success

If you’re measuring the success of your security program by looking at what didn’t happen, the hard work your security team is doing may never really be apparent, and people may not understand the amount of work that went into preventing bad things from happening. And obviously, that's difficult to quantify from a management perspective. This often has the unfortunate side effect that security teams measure themselves and their success by the bad things they stopped from happening. That may well be the case, but it's hard to measure, and it's actually quite a negative message. It can push security teams into the mindset that the way to stop bad things from happening is to make sure as few things change as possible.

Security teams should measure themselves on what they enable, and what they enable to happen securely. That's a much more tangible and positive way of measuring the worth of a security team and how effective it is. Any old security team, whether it's good or bad, can say no to everything. Good security teams understand the business and what the development team is trying to get done. It's really more about what they can enable the business to do securely, and that's going to require some novel problem solving. That means you're not just going to take solutions off the shelf and throw them at every problem.

Evaluating your organization’s security culture

Every company already has a security culture. It may not be the one they want, but they already have one. You need to build a security culture that works well for the larger organization and is in keeping with the larger organization's culture. I think we absolutely can take control of that security culture, and I'll go further and say that we have to. Otherwise, you're just going to end up with a culture that is not serving your organization well.

There are a lot of questions you should be considering when evaluating your culture. What is your current security culture? How does the rest of the company think about security? How does the rest of the company view your security team? Do people go out of their way to include the security team in conversations and decision-making, or do they prefer to chance it, hope security doesn't notice, and try to squeak under the radar? That says a lot about your security culture. If people aren't actively engaging with the subject matter experts, well, something's wrong there.
 
 
The O’Reilly Security Podcast: Recruiting and building future open source maintainers, how speed and security aren’t mutually exclusive, and identifying and defining first principles for security.

In this episode of the Security Podcast, O’Reilly’s Mac Slocum talks with Susan Sons, senior systems analyst for the Center for Applied Cybersecurity Research (CACR) at Indiana University. They discuss how she initially got involved with fixing the open source Network Time Protocol (NTP) project, recruiting and training new people to help maintain open source projects like NTP, and how security needn’t be an impediment to organizations moving quickly.

Here are some highlights:

Recruiting to save the internet

The terrifying thing about infrastructure software in particular is that paying your internet service provider (ISP) bill covers all the cabling that runs to your home or business; the people who work at the ISP; and their routing equipment, power, billing systems, and marketing—but it doesn't cover the software that makes the internet work. That is maintained almost entirely by aging volunteers, and we're not seeing a new cadre of people stepping up and taking over their projects. What we're seeing is ones and twos of volunteers who are hanging on but burning out while trying to do this in addition to a full-time job, or who are doing it instead of a full-time job and should be retired, or are retired. It's just not meeting the current needs. Early- and mid-career programmers and sysadmins say, 'I'm going to go work on this really cool user application. It feels safer.' They don't work on the core of the internet. Ensuring the future of the internet and infrastructure software is partly a matter of funding (in my O’Reilly Security talk on saving time, I talk about a few places you can donate to help with that, including ICEI and CACR), and partly a matter of recruiting people who are already out there in the programming world to get interested in systems programming and learn to work on this. I'm willing to teach. I have an Internet Relay Chat (IRC) channel set up on freenode called #newguard. Anyone can show up and get mentorship, but we desperately need more people.

Building for both speed and security

Security only slows you down when you have an insecure product, not enough developer resources, not enough testing infrastructure, or not enough infrastructure to roll out patches quickly and safely. When your programming teams have the infrastructure and scaffolding around software they need to roll out patches easily and quickly—when security has been built into your software architecture instead of plastered on afterward, and the architecture itself is compartmented and fault tolerant and has minimization taken into account—security doesn't hinder you. But before you build, you have to take a breath and say, 'How am I going to build this in?' or 'I’m going to stop doing what I’m doing, and refactor what I should have built in from the beginning.' That takes a long view rather than short-term planning.

Identifying and defining first principles for security

I worked with colleagues at the Indiana University Center for Applied Cybersecurity Research (CACR) to develop the Information Security Practice Principles (ISPP). In essence, the ISPP project identifies and defines seven rules that create a mental model for securing any technology. Seven may sound like too few, but the approach dates back to rules of warfare and Sun Tzu—how to protect things and how to make things resilient.

I do a lot of work from first principles. Part of my role is that I’m called in when we don't know what we have yet, or when something's a disaster and we need to triage. Best practice lists come from somewhere, but why do we teach people just to check off best practice lists without questioning them? If we teach more people to work from first principles, we can have more mature discussions, and we can actually get our C-suite or other leadership involved because we can talk in concepts they understand. Additionally, we can make decisions about things that don't have best practice checklists.
 
The O’Reilly Security Podcast: The growing role of data science in security, data literacy outside the technical realm, and practical applications of machine learning.

In this episode of the Security Podcast, I talk with Charles Givre, senior lead data scientist at Orbital Insight. We discuss how data science skills are increasingly important for security professionals, the critical role of data scientists in making the results of their work accessible to even nontechnical stakeholders, and using machine learning as a dynamic filter for vast amounts of data.

Here are some highlights:

Data science skills are becoming requisite for security teams

I expect to see two trends in the next few years. First, I think we’re going to see tools becoming much smarter. Not to suggest they're not smart now, but I think we're going to see the builders of security-related tools integrating more and more data science. We're already seeing a lot of tools claiming they use machine learning to do anomaly detection and similar tasks. We're going to see even more of that. Secondly, I think rudimentary data science skills are going to become a core competency for security professionals. Accordingly, I expect we are going to increasingly see security jobs requiring some understanding of core data science principles like machine learning, big data, and data visualization. Of course, I still think there will be a need for data scientists. Data scientists are going to continue to do important work in security, but I also think basic data science skills are going to proliferate throughout the overall security community.

Data literacy for all

I'm hopeful we're going to start seeing more growth in data literacy training for management and nontechnical staff, because it's going to be increasingly important. In the years to come, management and executive-level professionals will need to understand the basics—maybe not a technical understanding, but at least a conceptual understanding of what these techniques can accomplish. Along those lines, one of the core competencies of a data scientist is, or at least arguably should be, communication skills. I'd include data visualization in that skillset. You can use the most advanced modeling techniques and produce the most amazing results, but if you can't communicate them in an effective manner to a stakeholder, then your work is not likely to be accepted, adopted, or trusted. As such, making results accessible is really a vital component of a data scientist’s work.

Machine learning as a dynamic filter for security data

Machine learning and deep learning have definitely become the buzzwords du jour of the security world, but they genuinely bring a lot of value to the table. In my opinion, the biggest value machine learning brings is the ability to learn and identify new patterns and behaviors that represent threats. When I teach machine learning classes, one of the examples I use is domain generation algorithm (DGA) detection. You can do this with a whitelist or a blacklist, but neither is going to be the most effective approach. There's been a lot of success in using machine learning to identify these domains, allowing you to then mitigate the threat. A colleague of mine, Austin Taylor, gave a presentation and wrote a blog post about how machine learning fits into the overall schema. He views data science in security as being most useful in building a very dynamic filter for your data.

If you imagine an inverted triangle, you begin by examining tons and tons of data, but you can use machine learning to filter out the vast majority of it. From there, a human might still have to look at the remaining portion. By applying several layers of machine learning to that initial ingested data, you can efficiently filter out the stuff that's not of interest.
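
The episode stays at the conceptual level, but the DGA-detection example lends itself to a short sketch. Below is a minimal, hypothetical illustration using scikit-learn; the tiny domain lists, the character n-gram features, and the model choice are all assumptions made for demonstration, not anything from the episode, and a real classifier would be trained on large labeled corpora.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: label 0 = benign, 1 = DGA-like.
benign = ["google.com", "oreilly.com", "wikipedia.org", "github.com"]
dga_like = ["xkqzpv.com", "qwmznbo.net", "zzkqpwrtx.info", "plvqk.biz"]
domains = benign + dga_like
labels = [0] * len(benign) + [1] * len(dga_like)

# Character n-grams capture the "randomness" of generated names
# better than whole-token features do.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(domains, labels)

# Score new observations instead of hard-blocking them.
for domain in ["mail.oreilly.com", "kqzxwvupt.com"]:
    p_dga = model.predict_proba([domain])[0][1]
    print(f"{domain}: P(DGA) = {p_dga:.2f}")
```

Scoring rather than hard blocking keeps the model in the "dynamic filter" role described above: high-scoring domains get surfaced to an analyst instead of being alerted on blindly.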
 
 
The O’Reilly Security Podcast: The multidisciplinary nature of defense, making security accessible, and how the current perception of security professionals hinders innovation and hiring.

In this episode of the Security Podcast, I talk with Andrea Limbago, chief social scientist at Endgame. We discuss how the misperception of security as a computer science skillset ultimately restricts innovation, the need to make security easier and accessible for everyone, and how the current branding of security can discourage newcomers.

Here are some highlights:

The multidisciplinary nature of defense

The general perception is that security is a skillset in the computer science domain. Having been in the industry for several years, I've noticed more and more the need for disciplines outside of computer science within security. For example, we need data scientists to help handle the vast amount of security data and guide the daily collection and analysis of data. Another example is the need to craft accessible user interfaces for security. So many of the existing security tools and best practices just aren't user friendly. Of course, you also need the computer science expertise—from the more traditional hackers to defenders. All that insight can come together to help inform a more resilient defense. Beyond that, there’s the consideration of the impact of economics and psychology. This is especially relevant when you think about insider threat. It's really something I wish more people would think about from a broader perspective, and I think that would actually help attract a lot more people into the industry as well, which we desperately need right now.

Making security accessible and easier for all

We need to do a better job of informing the general public about security. Those of us in the security field see information on how to secure our accounts and devices all the time, but I consistently come across people outside of our industry who still don't understand things like two-factor authentication, or why it would be helpful for them. These are very smart people. Part of the challenge is that we, as an industry, haven't done a phenomenal job of branching out and talking in more common language about the various steps people can take. People know they need to be secure, but they really don't know what the key steps are. This month, for National Cybersecurity Awareness Month, there are going to be hundreds of ‘Here are 10 things you need to do to be secure’-style articles, but these messages are not always making their way to the actual target audience. Security needs to become more of a mainstream concern, and it needs to be made easier for people to secure their accounts and devices. We talk a lot about the convenience-versus-security trade-off, and for a lot of people, convenience is still what matters most. It's really hard to switch the incentive structure for people to help them understand that taking all these steps toward better security truly is worth the investment of their time. For us, as an industry, if we make it as easy as possible, I think that will help.

Security has a branding problem

We need to do a better job of making security appealing to a broader audience. When I talk to students and ask them what they think about security and cyber security and hacking, they immediately think of a guy in a dark hoodie. That alone is limiting people from getting excited about entering the workforce. Obviously, the discipline and the industry are much broader than that.

We, as an industry, need to rework our marketing campaigns to show other kinds of stock photos. If we can do that, we can start getting more diverse people interested and coming into the industry. By attracting the interest of a broader range of students and having them bring their diverse skillsets in from other disciplines, we can strengthen our defenses and increase innovation. If we change the branding of security and the perception of what it means to be a security professional, we can help fill the pipeline, which is one of our most crucial missions as an industry at this time.
 
The O’Reilly Security Podcast: Why tools aren’t always the answer to security problems, and the oft-overlooked impact of user frustration and fatigue.

In this episode of the Security Podcast, I talk with Window Snyder, chief security officer at Fastly. We discuss the fact that many core security best practices aren’t easy to achieve with tools, the importance of not discounting user fatigue and frustration, and the need to personalize security tools and processes to your individual environment.

Here are some highlights:

Many security tasks require a hands-on approach

There are a lot of things that we, as an industry, have known how to do for a very long time but that are still expensive and difficult to achieve. This includes things like staying up to date with patching or moving to more sophisticated authorization models. These types of tasks generally require significant work, and they might also impose a workflow obstacle to users that's expensive. Another proven and measurable way to improve security is to review deployments and identify features or systems that are no longer serving their original purpose but are still enabled. If they're still enabled but no longer serving a purpose, they may leave you unnecessarily open to vulnerabilities. In these cases, a plan to reduce attack surface by eliminating these features or systems is work that humans generally must do, and it actually does increase the security of your environments in a measurable way, because now your attack surface is smaller. These aren’t the sorts of activities that you can throw a tool in front of and feel like you've checked a box.

Frustration and fatigue are often overlooked considerations

Realistically, it's challenging for most organizations to achieve all the things we know we need to do as an industry. Getting the patch window down to a smaller and smaller size is critical for most organizations, but you have to consider this within the context of your organization and its goals. For example, if you’re patching a sensitive system, you may have to balance the need to reduce the patch window against the stability of the production environment. Or if a patch requires you to update users’ workstations, the frustration of having to update their systems and having their machines rebooted might derail productivity. It's an organizational leap to say that it's more important to address potential security problems when you are dealing with the very real obstacle of user frustration or security exhaustion. This is complicated by the fact that there's an infinite parade of things we need to be concerned about.

More is not commensurate with better

It’s reasonable to try to scale security engineering by finding tools you can leverage to help address more of the work that your organization needs. For example, an application security engineer might leverage a source analysis tool. Source analysis tools help scale the number of applications you can assess in the same amount of time, and that’s reasonable because we all want to make better use of everyone's time. But without someone tuning the source analysis tool to your specific environment, you might end up with a tool that finds a lot of issues, creates a lot of flags, and then overwhelms the engineering team with the sheer amount of data.

They might conceivably look at the results and realize that the tool doesn't understand the mitigations that are already in place, or the reasons these issues aren't going to be a problem, and may end up disregarding what the tool identifies. Once fatigue sets in, the tool may well be identifying real problems, but the value the tool contributes ends up being lost.
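
As a rough illustration of that tuning problem, here is a minimal, hypothetical Python sketch of one common mitigation: a reviewed baseline that suppresses findings the team has already triaged, so engineers only see what's new. The finding format and sample data are invented for the example, not taken from any particular tool.

```python
# Findings the team has already triaged and accepted, e.g., because
# mitigations exist elsewhere in the code. The (file, rule) pair is a
# hypothetical finding format.
baseline = {
    ("app/login.py", "SQLI-001"),
    ("app/util.py", "XSS-014"),
}

# Hypothetical raw output from a source analysis tool.
raw_findings = [
    {"file": "app/login.py", "rule": "SQLI-001"},  # previously reviewed
    {"file": "app/views.py", "rule": "XSS-014"},   # new: worth a look
]

# Surface only what is new, so alert fatigue doesn't bury the real
# problems and the team keeps trusting the tool's output.
new_findings = [
    f for f in raw_findings if (f["file"], f["rule"]) not in baseline
]
for finding in new_findings:
    print(f"{finding['file']}: {finding['rule']}")
```

Many analysis tools offer some form of baseline or suppression mechanism natively; the point is that the tuning work has to happen somewhere, or the tool's output gets disregarded wholesale.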
 
The O’Reilly Security Podcast: Shifting secure code responsibility to developers, building secure software quickly, and the importance of changing processes.

In this episode of the Security Podcast, I talk with Chris Wysopal, co-founder and CTO of Veracode. We discuss the increasing role of developers in building secure software, maintaining development speed while injecting security testing, and helping developers identify when they need to contact the security team for help.

Here are some highlights:

The challenges of securing enduring vs. new software

One of the big challenges in securing software is that it’s most often built, maintained, and upgraded over many years. Think of online banking software for a financial services company. They probably started building it 15 years ago, and it's probably gone through two or three major changes, but the tooling and the language and the libraries, and all the things they're using, are built from the original code. Fitting security into that style of software development presents challenges, because those teams aren't used to the newer tool sets and the newer ways of doing things. It's actually sometimes easier to integrate security into newer software. Even though those teams are moving faster, it's easier to integrate into some of the newer development toolchains.

Changing processes to enable small-batch testing and fixing

There are parallels between where we are with security now and where performance was at the beginning of the Agile movement. With Agile, the thought was, ‘We're going to go fast, but one of the ways we're going to maintain quality is to require unit tests written by every developer for every piece of functionality, and these automated unit tests will run on every build and every code change.’ By changing the way you do things, from a manual, backend-weighted full system test to smaller-batch incremental tests of pieces of functionality, you're able to speed up the development process without sacrificing quality. That's a change in process. To have a high-performing application, you didn't necessarily need to spend more time building it. You needed better intelligence—APM technology put into production to understand performance issues better and more quickly allowed teams to still go fast and not have performance bottlenecks. With security, we're going to see the same thing. There can be some additional technology put into play, but the other key factor is changing your process. We call this ‘shifting left,’ which means: find the security defect as early as possible in the development lifecycle, so that it's cheaper and quicker to fix. For example, if a developer writes a cross-site scripting error while coding in JavaScript, and they're able to detect it within minutes of creating the flaw, it will likely only require minutes or seconds to fix. Whereas if that flaw is discovered two weeks later by a manual tester, it's going to be entered into a defect tracking system. It's going to be triaged. It's going to be put into someone's bug queue. With the delay in identification, it will have to be researched in its original context, which slows down development. Now, you're potentially talking hours of time to fix the same flaw, maybe 10 or 100 times more time. Shifting left is a way of thinking about, ‘How do I do small-batch testing and fixing?’ That's a process change that enables you to keep going fast and be secure (a toy example of this kind of fast feedback appears after this summary).

Helping developers identify when they need to call for security help

We need to teach developers about application security to enable them to identify when there’s a problem and when they don't know enough to solve it themselves. One of the problems with application security is that developers often don't know enough to recognize when they need to call in an expert. For example, when an architect is building a structure and knows there’s a problem with the engineering of a component, the architect knows to call in a structural engineer to augment their expertise. We need to have the same dynamic with software developers. They're experts in their field, and they need to know a lot about security. They also need to know when they require help with threat modeling, or when to have a manual code review performed on a really critical piece of code, like an account recovery mechanism. We need to shift more security expertise into the development organization, but part of that is also helping developers know when to call out to the security team. That also helps with the challenge of hiring security experts, because they're hard to find.
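
To make "shifting left" concrete, here is a toy pre-commit check in Python. This is not Veracode's tooling or anything described in the episode, and the pattern list is illustrative rather than a real static-analysis ruleset; it simply shows a developer hearing about a possible XSS sink within seconds of staging the code.

```python
# Toy pre-commit check: flag common DOM XSS sink patterns in staged
# JavaScript files. Assumes it runs inside a git repository.
import re
import subprocess
import sys

RISKY_SINKS = re.compile(r"\.innerHTML\s*=|document\.write\s*\(|\beval\s*\(")

def staged_js_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [path for path in out.splitlines() if path.endswith(".js")]

def main():
    findings = []
    for path in staged_js_files():
        with open(path, encoding="utf-8", errors="replace") as fh:
            for lineno, line in enumerate(fh, start=1):
                if RISKY_SINKS.search(line):
                    findings.append(f"{path}:{lineno}: possible XSS sink")
    for finding in findings:
        print(finding)
    # A nonzero exit blocks the commit while the context is fresh.
    sys.exit(1 if findings else 0)

if __name__ == "__main__":
    main()
```

Wired into a pre-commit hook, a failing exit code surfaces the issue in minutes rather than the weeks a manual test cycle might take, which is exactly the cost difference described above.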
 
 
The O’Reilly Security Podcast: The open-ended nature of incident response, and how threat intelligence and incident response are two pieces of one process.

In this episode of the Security Podcast, I talk with Scott Roberts, security operations manager at GitHub. We discuss threat intelligence, incident response, and how they interrelate.

Here are some highlights:

Threat intelligence should affect how you identify and respond to incidents

Threat intelligence doesn't exist on its own. It really can't. If you're collecting threat intelligence without acting upon it, it serves no purpose. Threat intelligence makes sense when you integrate it with the traditional incident response capability. Intelligence should affect how you identify and respond to incidents. The idea is that these aren't really two separate things; they're simply two pieces of one process. If you're doing incident response without using threat intelligence, you’ll keep getting hit with the same attack time after time. By the same token, if you have threat intelligence without incident response, you're just shouting into the void. No one is taking the information and making it actionable.

The open-ended nature of incident response

It’s key to think about incidents as ongoing. There are very few times when an attacker will launch an attack once, be rebuffed, and simply go away. In almost all cases, there's a continuous process. I've worked in organizations where we would do the work to identify an incident and promptly forget about it. Then three weeks later, we would suddenly stumble across the exact same thing. Ultimately, intelligence-driven incident response happens in those intervening three weeks. What are you doing in that time between incidents from the same actor, with the same target? And how are you using what you've learned to prepare for the next time? Regardless of the size of your organization, you can implement processes to better your defenses after each incident. It can be as simple as keeping good notes, thinking about root causes, and considering what could better protect your organization from the same or similar attackers in the future. Basically, instead of marking an incident closed as soon as you’ve dealt with the immediate threat, think beyond the current incident and try to understand what the attack is going to look like next time. Even if you can't identify the next iteration, you don't want to get hit by the same thing again. As your team expands and matures, there are opportunities for more specialized types of analysis and processes, but intelligence-driven incident response is something you can adopt regardless of your size or maturity.

Why more threat intelligence data is not always better

When a team gets started with threat intelligence, their first impulse is to try collecting the biggest data set imaginable, with the idea that there's going to be a magic way to pick out the needle in the haystack. While I understand why that may seem like a logical place to start, it's often a very abstract and time-intensive approach. When I look at intelligence programs, I first want to know what the team is doing with their own investigation data. The mass appeal of gathering a ton of information is all about trying to figure out which IP is most important to me or which piece of information I need to find. Often, I find that information is already available in a team's incident response database or their incident management platform.

I think the first place you should always look is internally. If you want to know what threats are going to be important to an organization, look at the ones you've already experienced. Once you’ve got those figured out, then go look at what else is out there. The first way to be effective, and to truly know you're doing relevant work for your organization's future defense, is to look at your past.
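
As a minimal sketch of "look internally first," here is a hypothetical Python example, assuming past incidents are recorded along with the indicators observed during response. The record format and sample values are invented for illustration.

```python
# Hypothetical incident records: each closed incident keeps the set of
# indicators (IPs, domains) observed during response.
from collections import Counter
from datetime import date

incidents = [
    {"closed": date(2017, 6, 2), "indicators": {"203.0.113.7", "evil.example"}},
    {"closed": date(2017, 6, 25), "indicators": {"203.0.113.7", "bad.example"}},
    {"closed": date(2017, 7, 14), "indicators": {"evil.example"}},
]

# Indicators seen in more than one incident are the "same attack time
# after time" signal worth acting on before buying external feeds.
counts = Counter(ioc for inc in incidents for ioc in inc["indicators"])
recurring = {ioc: n for ioc, n in counts.items() if n > 1}
print(recurring)  # e.g., {'203.0.113.7': 2, 'evil.example': 2}
```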
 
 
The O'Reilly Security Podcast: The role of community, the proliferation of BSides and other InfoSec community events, and celebrating our heroes and heroines.

In this episode of the Security Podcast, I talk with Jack Daniel, co-founder of Security BSides. We discuss how each of us (and the industry as a whole) benefits from community building, the importance of historical context, and the inimitable Becky Bace.

Here are some highlights:

The indispensable role and benefit of community building

As I grew in my career, I learned things that I shared. I felt that if you're going to teach me, then as soon as I know something new, I'll teach you. I began to realize that the more I share with people, the more they're willing to share with me. This exchange of information built trust and confidence. When you build that trust, people are more likely to share information beyond what they may feel comfortable saying in a public forum, and that may help you solve problems in your own environment. I realized these opportunities to connect and share information were tremendously beneficial not only to me, but to everyone participating. They build professional and personal relationships, which I've become addicted to. It’s a fantastic resource to be part of a community, and the more effort you put into it, the more you get back. Security is such an amazing community. We’re facing incredible challenges. We need to share ideas if we're going to pull it off.

Extolling InfoSec history with the Shoulders of InfoSec

I realized a few years ago that despite the fact I was friends with a lot of trailblazers in the security space, I didn't have much perspective on the history of InfoSec or hacking. I recognized that I have friends like Gene Spafford and the late Becky Bace who saw or participated in the foundation of our industry and know many of the stories of our community. I decided to do a presentation a few years ago at DerbyCon that introduced the early contributors and pioneers who made our industry what it is today and built the early foundation for our practices. I quickly realized that cataloging this history wasn't a single presentation but a larger undertaking. This is why I created the Shoulders of InfoSec program, which shines a light on the contributions of those whose shoulders we stand on. The idea is to make it easy to find a quick history of information security and, to a lesser extent, the hacker culture. As Newton himself paraphrased from earlier thinkers, if he had seen farther, it was by standing on the shoulders of giants—and we all stand on the shoulders of giants.

The inimitable Becky Bace

Becky was known as the den mother of IDS for her work fostering and supporting intrusion detection and network behavior analysis. But even beyond her amazing technical expertise and contributions, Becky gave the best hugs in the world. She was just an amazingly warm, friendly, and welcoming person. One of the things that always struck me about Becky is the number of people she mentored through the years, and the number of people whose careers got a start or a boost because of her. She was just pure awesome. She would go out of her way to help people, and the more they needed help, the more likely she would be to find them and help them. She came from southern Alabama, and when she came north to the D.C. area, her dad said, ‘You can go up north and get a job and marry a Yankee, but when you're done doing that, I want you to come home because, remember, we need help down here.’ For those who don't know, when she left her consulting practice, she went to the University of South Alabama—not even the University of Alabama, but the University of South Alabama—and set up a cyber security program. She was bringing cyber security education to people who otherwise wouldn't get it, and she built a fantastic program. She did it because she promised her dad she would.
 
 
The O’Reilly Security Podcast: The prevalence of convenient data, first steps toward a security data analytics program, and effective data visualization.

In this episode of the Security Podcast, Courtney Nash, former chair of the O’Reilly Security conference, talks with Jay Jacobs, senior data scientist at BitSight. We discuss the constraints of convenient data, the simple first steps toward building a basic security data analytics program, and effective data visualizations.

Here are some highlights:

The limitations of convenient data

In security, we often see the use of convenient data—essentially, the data we can get our hands on. You see that sometimes in medicine, where people studying a specific disease will grab the patients with that disease in the hospital where they work. There are some benefits to doing that. Obviously, the data collection is easy because you get the data that’s readily available. At the same time, there are limitations. The data may not be representative of the larger population. Using multiple studies combats the limitations of convenient data. For example, when I was working on the Verizon Data Breach Investigations Report, we tried to tackle that by diversifying the sources of data. Each individual contributor had their own convenient sample. They're getting the data they can access. Each contributing organization had their own biases, limitations, problems, and areas of focus. There are biases and inherent problems with each data set, but when you combine them, that's when you start to see the strength, because all of these biases start to level out and even off a little bit. There are still problems, including representativeness, but this is one of the ways to combat it.

The simple first steps to building a data analysis program

The first step is to just count and collect everything. As I work with organizations on their data, I see a challenge where people try to collect only the right things, or the things they think are going to be helpful. When they only collect things they originally think will be handy, they often miss things that are ultimately really helpful to analysis. Just start out counting and collecting everything—even things you don't think are countable or collectible. At one point, a lot of people didn't think you could put a breach, which is a series of events, into a format conducive to analysis. I think we've got some areas we could focus on, like pen testing and red team activity. These areas are ripe for a good data collection effort. If you're collecting all this data, you can do some simple counting and comparison: ‘This month I saw X and this month I saw Y.’ As you compare, you can see whether there’s change, and then discuss that change. Is it significant, and do we care? The other thing: a lot of people capture metrics and don’t actually ask whether they care if a number goes up or down. That's a problem. (A minimal sketch of this counting and comparing appears after this summary.)

Considerations for effective data visualization

Data visualization is a very popular field right now. It's not just concerned with why pie charts might be bad—there's a lot more nuance and detail. One important factor to consider in data visualization, just like communicating in any other medium, is your audience. You have to understand your audience, their motivations, and their experience levels. There are three things you should evaluate when building a data visualization. First, you start with your original research question. Then you figure out how the data collected answers that question. Then, once you start to develop a data visualization, you ask yourself whether the visualization matches what the data says, and whether it matches and answers the original question being asked. All three parts of that equation have to line up and explain each other; thinking of it that way helps people communicate better.
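
Here is the counting-and-comparing starting point from above as a tiny, hypothetical Python sketch. The event counts are invented; the interesting part is the follow-up question, not the arithmetic.

```python
# Hypothetical monthly event counts pulled from whatever you collect.
monthly_counts = {"2017-08": 412, "2017-09": 530}

months = sorted(monthly_counts)
for prev, cur in zip(months, months[1:]):
    delta = monthly_counts[cur] - monthly_counts[prev]
    pct = 100.0 * delta / monthly_counts[prev]
    print(f"{prev} -> {cur}: {delta:+d} events ({pct:+.1f}%)")
    # The number alone is not the metric. The follow-up questions are:
    # is the change significant, and do we care if it goes up or down?
```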
 
The O’Reilly Security Podcast: Why legal responses to bug reports are an unhealthy reflex, thinking through first steps for a vulnerability disclosure policy, and the value of learning by doing.

In this episode, O’Reilly’s Courtney Nash talks with Katie Moussouris, founder and CEO of Luta Security. They discuss why many organizations have a knee-jerk legal response to a bug report (and why yours shouldn’t), the first steps organizations should take in formulating a vulnerability disclosure program, and how learning through experience and sharing knowledge benefits all.

Here are some highlights:

Why legal responses to bug reports are a faulty reflex

For many organizations, the first reaction to a researcher reporting a bug is to immediately respond with legal action. These organizations aren’t considering that their lawyers typically don't keep users safe from internet crime or harm. Engineers fix bugs and make a difference in terms of security. Having your lawyer respond doesn't keep users safe and doesn't get the bug fixed. It might do something to temporarily protect your brand, but that's only effective as long as the bug in question remains unknown to the media. Ultimately, when you try to kill the messenger with a bunch of lawsuits, it looks much worse than taking the steps to investigate and fix a security issue. Ideally, organizations recognize that fact quickly. It’s also worth noting that the law tends to be on the side of the organization, not the researcher reporting a vulnerability. In the United States, the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act have typically been used to harass or silence security researchers who are trying to report something in the spirit of “if you see something, say something.” Researchers take risks when identifying bugs, because there are laws on the books that can easily be misused and abused to try to kill the messenger. There are laws in other countries as well that would similarly discourage well-meaning researchers from coming forward. It’s important to keep perspective and remember that, in most cases, you’re talking to helpful hackers who have stuck their necks out and potentially risked their own freedom to warn you about a security issue. Once organizations realize that, they're often more willing to cautiously trust researchers.

First steps toward a basic vulnerability disclosure policy

In 2015, market studies showed (and the numbers haven't changed significantly since then) that among the Forbes Global 2000—arguably home to some of the most prepared and proactive security programs—94% had no published way for researchers to report a security vulnerability. That’s indicative of the fact that these organizations probably have no plan for how they would respond if somebody did reach out and report a vulnerability. They might call in their lawyers. They might just hope the person goes away. At the most basic level, organizations should provide a clear way for someone to report issues. Additionally, organizations should clearly define the scope of issues they’re most interested in hearing about. Defining scope also includes providing the bounds for things you prefer hackers not do. I've seen a lot of vulnerability disclosure policies published on websites that say, please don't attempt a denial of service against our website, service, or products, because we know that with sufficient resources, attackers would be able to do that. They clearly request that people not test that capability, as it would provide no value.

Learning by doing and the value of sharing experiences

At the CyberUK conference, the U.K. National Cyber Security Centre’s (NCSC) industry conference, there was an announcement about NCSC’s plans to launch a vulnerability coordination pilot program. They've previously worked on vulnerability coordination through the U.K. Computer Emergency Response Team (CERT-UK), which merged into NCSC. However, they hadn’t standardized the process. They chose to learn by doing and launch pilot programs. They invited focused security researchers, whom they knew and had worked with in the past, to participate, and they outlined their intention to publicly share what they learned. This approach offers benefits, as it's focused not only on specific bugs but more so on the process, on the ways they can improve that process and share knowledge with their constituents globally. Of course, bugs will be uncovered, and strengthening the security of the targeted websites is obviously one of the goals of the program, but the emphasis on process and learning through experience really differentiates their approach and is particularly exciting.
 
 
The O’Reilly Security Podcast: Threat hunting’s role in improving security posture, measuring threat hunting success, and the potential for automating threat hunting for the sake of efficiency and consistency. In this episode, I talk with Alex Pinto , chief data scientist at Niddel . We discuss the role of threat hunting in security, the necessity for well-defined process and documentation in threat hunting and other activities, and the potential for automating threat hunting using supervised machine learning. Here are some highlights: Threat hunting’s role in improved detection At the end of the day, threat hunting is proactively searching for malicious activity that your existing security tools and processes missed. In a way, it’s an evolution of the more traditional security monitoring and log analysis that organizations currently use. Experienced workers in security operation center environments or with managed security services providers might say, ‘Well, this is what I've been doing all this time, so maybe I was threat hunting all along.’ The idea behind threat hunting is that you're not entirely confident the tools and processes in place are identifying every single problem you might have. So, you decide to scrutinize your environment and available data, and hopefully grow your detection capability based on what you learn. There are some definitions, which I'm not entirely in agreement with, that say that, ‘It's only threat hunting when it's a human activity. So, the definition of threat hunting is when humans are looking for things that the automation missed.’ I personally think that's very self-serving. I think this human-centric qualifier is a little bit beside the point. We should always be striving to automate the work that we're doing as much as we can. Gauging success by measuring dwell time It's still very challenging to manage productivity and success metrics for threat hunting. This is an activity where it’s easy to spin your wheels and never find anything. There's a great metric called dwell time, which admittedly can be hard to measure. Dwell time measures the average time for the incident response team to find something as opposed to when the machine was originally infected or compromised. How long did it take for the alert to be generated or for the issue to be found via hunting? We’ve all heard vendor pitches saying something along the lines of, ‘Companies take more than 100 days to find specific malware in their environments.’ You should be measuring dwell time within your own environment. If you start to engage in threat hunting and you see this number decrease, you're finding issues sooner, and that means the threat hunting is working. The environments where I've seen the most success with threat hunting utilized their incident response (IR) team for the task or built a threat hunting offshoot from their IR team. These team members were already very comfortable with handling incidents within the organization. They already understood the environment well, knew what to look for, and where they should be looking. IR teams may be able to spend some of their time proactively looking for things and formulating hypotheses of where there could be a blind spot or perhaps poorly configured tools, and then researching those potential problem areas. Documentation is key. By documenting everything, you build organizational knowledge and allow for consistency and measurement of success. 
The potential for automating threat hunting

There are a lot of different factors you can consider in deciding whether something is malicious. The hard part is the actual decision-making process. What really matters is the ability of a human analyst to decide whether an activity is malicious or not, and how to proceed. Using human analysts to review every scenario doesn't scale, especially given the complexity and number of factors they have to explore in order to make a decision. I’ve been exploring when and how we can automate that decision-making process, specifically in the case of threat hunting. For people who have some familiarity with machine learning, threat hunting appears to fit well with a supervised machine learning model. You have vast amounts of data, and you have to make a call whether to classify something as good or bad. In any model that you’re training, you should use previous experience to classify benign activities to reduce noise. When we automate as much of this process as possible, we improve efficiency, the use of our team’s time, and consistency. Of course, it’s important to also consider the difficulties in pursuing this automation, and how we can try to circumvent those difficulties.…
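For readers unfamiliar with the supervised approach, here is a minimal sketch of framing triage as classification, assuming you have a history of events already labeled malicious or benign by analysts; the features and data below are hypothetical placeholders, not Niddel's actual model.

from sklearn.ensemble import RandomForestClassifier

# Each row is one observed event: [destination reputation score,
# bytes transferred, connections per hour]. Label 1 = malicious.
X_train = [
    [0.9, 120_000, 40],
    [0.1, 2_000, 2],
    [0.8, 300_000, 55],
    [0.2, 5_000, 3],
]
y_train = [1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score new, unlabeled activity so analysts review only what looks
# suspicious instead of every single event.
print(model.predict_proba([[0.7, 250_000, 30]]))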
 
The O’Reilly Security Podcast: How to approach asset management, improve user education, and strengthen your organization’s defensive security with limited time and resources.

In this episode, I talk with Amanda Berlin, security architect at Hurricane Labs. We discuss how to assess and develop defensive security policies when you’re new to the task, how to approach core security fundamentals like asset management, and generally how you can successfully improve your organization’s defensive security with limited time and resources.

Here are some highlights:

The value of ongoing asset management

Whether you're one person or you have a large security team, asset management is always a pain point. It’s exceedingly rare to see an organization correctly implementing asset management. In an ideal situation, you know where all of the devices coming into your network are. You have alerts set to sound if a new MAC address shows up. You want to know and be alerted if something plugs in or connects to your wireless network that you've never seen before, or haven't approved (a minimal sketch of such an alert appears after this summary). You should never look at asset management as a box to check; it’s an ongoing process. Collaborate with your purchasing department: as they purchase and distribute PCs, you should be tracking each asset at every step. And follow the same process when your organization gets rid of equipment. All laptops and servers eventually die; be sure to record those changes as well. This is important from a security perspective, and it may also save on software licensing, so you're not paying for licenses for computers you no longer have.

Budget-friendly user education

A lot of people have computer-based phishing education once a year; it gets lumped in with things like learning how to use a fire extinguisher. That never sticks. People will click straight through the training, retake the test until they get a passing grade, and quickly forget about it. Instead, you need a repetitive process with multiple levels. The first step is to search the web for email addresses in your organization that are readily available online. Those should be your first targets, because they are the most likely to be attacked by bots and other automated phishing programs. Then move on to people in finance, database administrators, and other individuals with significant power within the organization. Send them a couple of sentences of plain text and an internal link from a Gmail address to see if they give up their username and password. I have found that, before training, 60% to 80% of the employees targeted will click on the link. You should see clear progress over multiple levels of this training. Keep extensive metrics on the percentage of people who clicked the emailed link and the percentage of people who gave up their passwords, both before and after training. And be careful not to only identify “wrong behavior.” Place emphasis on educating staff about whom to contact if something seems weird, and then provide positive reinforcement when they report suspicious activity quickly and effectively. Empowering your staff in this way provides quick, effective, and budget-friendly reporting.

Preparation is key for incident response

Incident response plans can be as simple or as complex as fits your organization’s needs. For some organizations, an incident response plan may be to shut everything off and call a third party for help. If you decide to go with a third-party incident response plan, you should have that contract in place beforehand.
If you wait until you need services immediately, you’ll have no time or room to negotiate fees or compare providers. You’ll also be facing an emergency and will lose time providing background on your systems to the third party. Putting a plan in place in advance, no matter how simple, will be cost effective, save time, and allow you to recover from an incident more efficiently and effectively. Other organizations may be able to manage a full-blown investigation internally, depending on the severity. Some places are advanced enough that they can reverse malware independently. Many places aren't. Regardless, you must know where to draw the line on stopping your incident response internally and getting someone external to come in and help. Once again, determining where that line is for your organization ahead of time is key. You don't want to have to make that decision in the middle of an incident.…
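Returning to the asset-management advice above, here is a minimal sketch of the kind of new-device alert Berlin describes, assuming you keep an approved inventory and can pull the MAC addresses currently seen on the network (for example, from DHCP leases); the data and alert mechanism are hypothetical.

# Hypothetical approved-asset inventory, keyed by MAC address.
approved_macs = {
    "aa:bb:cc:dd:ee:01",  # finance-laptop-01
    "aa:bb:cc:dd:ee:02",  # db-server-01
}

def check_for_unknown_devices(seen_macs):
    """Flag any MAC seen on the network that isn't in the inventory."""
    unknown = set(seen_macs) - approved_macs
    for mac in sorted(unknown):
        # In practice this would page someone or open a ticket.
        print(f"ALERT: unapproved device on network: {mac}")
    return unknown

check_for_unknown_devices(["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:99"])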
 
The O’Reilly Security Podcast: Key preparation before implementing a vulnerability disclosure policy, the crucial role of setting scope, and the benefits of collaborative relationships.

In this episode, I talk with Kimber Dowsett, security architect at 18F. We discuss how to prepare your organization for a vulnerability disclosure policy, the benefits of starting small, and how to apply lessons learned to build better defenses.

Here are some highlights:

Gauging readiness for a vulnerability policy or a bug bounty program

It’s critical to develop a response and remediation plan before you launch a disclosure policy. You should be asking, ‘Are we set up to respond to vulnerabilities as they come in?’ and ‘Do we have a workflow in place for remediation?’ Organizations need to be sure they're not relying on a vulnerability disclosure policy to find bugs, vulnerabilities, or holes in their applications and code. It’s critical to ensure you have a mature, solid product in place before you open it up to the world and invite scrutiny. Additionally, vulnerability disclosure policies and bug bounty programs shouldn't be thought of as low-cost quality assurance. Code that hasn't been tested isn't viable for these programs. If your product hasn't been tested, torn apart, tested again, and put through pen tests, then it’s not ready, particularly for a bug bounty program. Even if you're ready for a vulnerability disclosure policy, there's a good chance you're not yet ready for a bug bounty program.

Start small and proceed with caution

If you don’t start small, there's a good chance you're going to get hit in ways you're not prepared to handle, and probably with issues you'd never even considered. When we launched the 18F policy, we launched it with three sites and then rolled out additional sites as they were ready. If a team said to me, ‘Okay, we think we're good to go to be added to the disclosure policy,’ then we would review their pen test results, development, back end, and code reviews. It's a much slower process, but it returns better results. Going all in at the start and declaring that everything is in scope for your policy is shooting yourself in the foot. We have been cautious, and we've had a very successful, slow rollout of vulnerability disclosure. We've proceeded with caution, and that has worked well for us.

The benefits of building collaborative relationships

When we confirm a vulnerability, our blue team explores how we would have defended against it, or ways we can defend against it until remediation is complete. Then, our pen testers, security engineers, or developers look to add something about the vulnerability to their toolkits to test for similar insecurities as they are building apps. We really shoot for baked-in security, but there's always going to be a ‘gotcha.’ If researchers submit reports in meaningful ways, we are able to use that to save ourselves time and energy in the triage process, and move straight to determining the best defense and how to find and secure similar problems in the future. We’ve built a process that fosters collaborative relationships with researchers. When researchers make high-quality submissions, we ensure their discoveries are welcomed and, of course, responsibly disclosed. In a successful program, researchers become part of the security process, as they’ve contributed in a meaningful way to the security of one of our applications. When researchers feel welcome, we all win.…
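The episode doesn't prescribe specific tooling, but as one hypothetical illustration of folding a confirmed report back into a team's toolkit, a developer might add a regression test for the class of bug a researcher found; the endpoint, payload, and URL below are invented for the sketch.

import requests

# A payload representative of the reflected-XSS class a researcher
# reported; the test fails the build if it comes back unescaped.
XSS_PROBE = "<script>alert(1)</script>"

def test_search_escapes_user_input(base_url="https://staging.example.gov"):
    resp = requests.get(f"{base_url}/search", params={"q": XSS_PROBE})
    assert XSS_PROBE not in resp.text, "search page reflects unescaped input"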
 
The O’Reilly Security Podcast: How adversarial posture affects decision-making, how decision trees can build more dynamic defenses, and the imperative role of UX in security.

In this episode, I talk with Kelly Shortridge, detection product manager at BAE Systems Applied Intelligence. We talk about how common cognitive biases apply to security roles, how decision trees can help security practitioners overcome assumptions and build more dynamic defenses, and how combining security and UX could lead to a more secure future.

Here are some highlights:

How the win-or-lose mindset affects defenders’ decision-making

Prospect theory asserts that how we make decisions depends on whether we’re in a domain-of-gains mindset or a domain-of-losses mindset. An apt analogy is how gamblers make decisions. When gamblers are in the hole, they're a lot more likely to make risky decisions. They're trying to recoup their losses and reason they can do that by making a big leap, even if it's unlikely to succeed. In reality, it would be better if they either cut their losses or made smaller, safer bets. But gamblers often don’t see things that way because they’re operating in a domain-of-losses mindset, which is also true of many security defenders. Defenders, for the most part, manifest biases that make them willing to make riskier decisions. They're more willing to implement solutions against a 1% likelihood of attack than to implement the basics, like two-factor authentication, good server hygiene, and network segmentation. We see a lot more defenders buying those really niche tools because, in my view, they're trying to get back to the status quo. They’re willing to spend millions on incident response, particularly if they've just experienced an acute loss, like a data breach. If they had spent those millions on basic controls, they likely wouldn't have had that breach in the first place.

Planning dynamic defenses and overcoming assumptions with decision trees

Defenders frequently have static strategies. They aren't necessarily thinking through the next steps: how attackers will respond if they implement two-factor authentication, antivirus software, or whitelisting. Decision trees codify your thinking and encourage you to figure out how an attacker might respond to or try to work around your initial defenses, not just your first step. Different branches show how you think an attacker could move through your network to reach their end goal. By including your defensive strategies and the probability of success for each, you're essentially documenting your assumptions about how likely your defensive tools are to work and how likely attackers are to use certain moves. That means if you have a breach or incident, or if you get new data on attacker groups, you can start to refine your model. You can identify where your assumptions might have fallen through. It keeps you honest with tangible metrics, which is important in addressing cognitive biases. Knowing where you failed improves your defenses. It shows how your assumptions need to be tweaked. (A minimal sketch of such a tree appears after this summary.)

Why security needs UX, and vice versa

We've done a terrible job as an industry of incorporating UX into security design. People lament all the time, regardless of product, that security warnings aren't worded correctly. Either they scare users or people blindly click through them. No one seems focused on how to effectively incorporate security into product design itself.
Designers and developers often view security as a complete nuisance: necessary, but in many ways a hindrance. Security professionals often view UX as a waste of time and blame insecurity on users who click on things they shouldn’t. Security and UX need to meet in the middle. This is an area that is ripe with opportunity and needs to be explored, because it could make a meaningful change in the industry. Using UX to encourage users to make better, more secure decisions as they conduct their various IT activities would have a huge impact on security.…
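Here is a minimal sketch of the decision-tree idea Shortridge describes, assuming you can estimate the probability that each defense stops the corresponding attacker move; the attack steps, defenses, and probabilities are hypothetical placeholders, not measured data.

# Each node: an attacker step, the estimated chance your defense
# stops it, and the attacker's next step if it doesn't.
tree = {
    "step": "phish an employee",
    "p_defense_works": 0.6,   # e.g., training plus mail filtering
    "next": {
        "step": "harvest credentials",
        "p_defense_works": 0.9,  # e.g., two-factor authentication
        "next": {
            "step": "move laterally to the database",
            "p_defense_works": 0.5,  # e.g., network segmentation
            "next": None,
        },
    },
}

def p_attacker_reaches_goal(node):
    """Chance every defense along this branch fails in sequence."""
    if node is None:
        return 1.0
    return (1.0 - node["p_defense_works"]) * p_attacker_reaches_goal(node["next"])

print(f"{p_attacker_reaches_goal(tree):.3f}")  # 0.4 * 0.1 * 0.5 = 0.020

Writing the numbers down is the point: after an incident, or with new intelligence on attacker groups, you update the probabilities and see exactly which assumptions broke.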
 