Cloud Security

Explore top LinkedIn content from expert professionals.

  • View profile for Danny Steenman

    Helping startups build faster on AWS while controlling costs, security, and compliance | Founder @ Towards the Cloud

    11,403 followers

    I've set up hundreds of AWS accounts for clients over the years. Here's your essential checklist when starting a new AWS account:

    1. Delete the default VPC and create a custom one
    2. Set up budget alerts
    3. Enable CloudTrail logs
    4. Configure a strong password policy
    5. Enforce MFA for all users
    6. Enable AWS Resource Explorer
    7. Set up IAM roles and least-privilege access
    8. Enable AWS Security Hub for centralized security management
    9. Implement a tagging strategy for cost allocation
    10. Enable AWS Organizations for a multi-account strategy

    These steps establish a robust foundation for security, cost management, compliance, and scalability. Pro tip: Automate this process with Infrastructure as Code (IaC) tools like AWS CloudFormation, AWS CDK, or Terraform. It ensures consistency and saves time on future setups. Which of these do you prioritize? Any crucial steps I missed? Share your thoughts!
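A checklist item like the budget alert (step 2) is a natural fit for the IaC pro tip above. A minimal sketch in Python that renders a CloudFormation template as a plain dict, with an `AWS::Budgets::Budget` resource; the limit, threshold, and email address are illustrative placeholders, not recommendations:

```python
import json

def budget_alert_template(limit_usd: float, email: str) -> dict:
    """Render a minimal CloudFormation template containing a monthly
    cost budget that emails a subscriber when actual spend crosses
    80% of the limit. Values passed in are illustrative."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "MonthlyBudget": {
                "Type": "AWS::Budgets::Budget",
                "Properties": {
                    "Budget": {
                        "BudgetName": "monthly-cost-budget",
                        "BudgetType": "COST",
                        "TimeUnit": "MONTHLY",
                        "BudgetLimit": {"Amount": limit_usd, "Unit": "USD"},
                    },
                    "NotificationsWithSubscribers": [{
                        "Notification": {
                            "NotificationType": "ACTUAL",
                            "ComparisonOperator": "GREATER_THAN",
                            "Threshold": 80,
                        },
                        "Subscribers": [{
                            "SubscriptionType": "EMAIL",
                            "Address": email,
                        }],
                    }],
                },
            }
        },
    }

# Serialize and deploy with CloudFormation, CDK, or your tool of choice.
template = budget_alert_template(100.0, "ops@example.com")
print(json.dumps(template, indent=2))
```

Keeping the template in code (rather than clicking through the console) is exactly what makes setups 2–10 repeatable across accounts.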

  • View profile for Sean Connelly🦉

    Architect of U.S. Federal Zero Trust | Co-author NIST SP 800-207 & CISA Zero Trust Maturity Model | Former CISA Zero Trust Initiative Director | Advising Governments & Enterprises

    22,645 followers

    🚨NSA Releases Guidance on Hybrid and Multi-Cloud Environments🚨

    The National Security Agency (NSA) recently published an important Cybersecurity Information Sheet (CSI): "Account for Complexities Introduced by Hybrid Cloud and Multi-Cloud Environments." As organizations increasingly adopt hybrid and multi-cloud strategies to enhance flexibility and scalability, understanding the complexities of these environments is crucial for securing digital assets. This CSI provides a comprehensive overview of the unique challenges presented by hybrid and multi-cloud setups.

    Key Insights Include:
    🛠️ Operational Complexities: Addressing the knowledge and skill gaps that arise from managing diverse cloud environments, and the potential for security gaps due to operational silos.
    🔗 Network Protections: Implementing Zero Trust principles to minimize data flows and secure communications across cloud environments.
    🔑 Identity and Access Management (IAM): Ensuring robust identity management and access control across cloud platforms, adhering to the principle of least privilege.
    📊 Logging and Monitoring: Centralizing log management for improved visibility and threat detection across hybrid and multi-cloud infrastructures.
    🚑 Disaster Recovery: Utilizing multi-cloud strategies to ensure redundancy and resilience, facilitating rapid recovery from outages or cyber incidents.
    📜 Compliance: Applying policy as code to ensure uniform security and compliance practices across all cloud environments.

    The guide also emphasizes the strategic use of Infrastructure as Code (IaC) to streamline cloud deployments and the importance of continuous education to keep pace with evolving cloud technologies. As organizations navigate the complexities of hybrid and multi-cloud strategies, this CSI provides valuable insights into securing cloud infrastructures against the backdrop of increasing cyber threats. Embracing these practices not only fortifies defenses but also ensures a scalable, compliant, and efficient cloud ecosystem.

    Read NSA's full guidance here: https://lnkd.in/eFfCSq5R

    #cybersecurity #innovation #ZeroTrust #cloudcomputing #programming #future #bigdata #softwareengineering
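The "policy as code" idea from the compliance insight can be sketched in a few lines. This toy evaluator (the policy names and rules are my own illustration, not from the NSA guide) shows the core mechanic: the same machine-readable checks run against resource configurations from any cloud provider, which is what gives you uniform compliance across a multi-cloud estate:

```python
# Toy policy-as-code evaluator: each policy is a named predicate over a
# resource-config dict; identical rules apply to any provider's configs.
POLICIES = {
    "no-public-storage": lambda r: not r.get("public", False),
    "encryption-at-rest": lambda r: r.get("encrypted", False),
    "logging-enabled": lambda r: r.get("logging", False),
}

def evaluate(resource: dict) -> list[str]:
    """Return the names of the policies this resource violates."""
    return [name for name, check in POLICIES.items() if not check(resource)]

# A storage bucket config (illustrative fields) fails two policies:
bucket = {"name": "backups", "public": True, "encrypted": True, "logging": False}
print(evaluate(bucket))  # ['no-public-storage', 'logging-enabled']
```

Real deployments use engines like Open Policy Agent for this, but the principle is the same: policies live in version control and are evaluated automatically, not enforced by hand per cloud.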

  • View profile for Thai Duong

    Chief at Calif | We're hiring calif.io/jobs

    11,275 followers

    You've probably seen the news: Oracle Cloud got popped, exposing 6 million records from over 140,000 tenants. The breach came to light after user "rose87168" dropped the loot on Breach Forums. The alleged attacker told Bleeping Computer that they used a known vulnerability to hit Oracle Cloud's SSO endpoint at login.<region>.oracle.com. Chances are, it was either CVE-2021-35587 or CVE-2022-21445.

    Both issues were discovered and reported by our very own Đức Nguyễn, together with Jang Nguyen, who has also joined our red team on many fun adventures. Duc found the bugs before he even joined the team. As Duc explained in his blog (link in comments), these are monster bugs, affecting a wide swath of Oracle products and companies. During their research, Jang and Duc even managed to pwn multiple systems under oracle.com, including the SSO endpoint at login.oracle.com (see the picture below). In 2023, we used the same vuln to compromise an Oracle BI instance buried deep inside a bank during a beautiful money heist simulation.

    Oracle products are notoriously complex, and Oracle is not exactly famous for fast patching. It took them more than six months to fix CVE-2021-35587 and CVE-2022-21445. Some deprecated product lines never got patches at all. As a result, many Oracle systems are left outdated and vulnerable. At this point, if you're running Oracle, it's probably safer to assume you're already breached, and plan your defense accordingly.

  • 🔐 RBAC vs. ABAC: Choosing the Right Access Control for Your IAM Strategy 🚀

    In Identity and Access Management (IAM), controlling who can access what is critical. Two powerful approaches—Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC)—offer distinct ways to manage permissions. But which one fits your needs? Let’s break it down! 🧠

    🔍 Role-Based Access Control (RBAC)
    What is it? Assigns permissions based on predefined roles tied to job functions (e.g., "Admin," "Developer"). Users inherit access through their roles.
    How it works: Admins define roles and assign users to them. Permissions are tied to roles, not individuals.
    Best for: Organizations with clear hierarchies and stable access needs (e.g., enterprise apps like Salesforce).
    Pros: Simple to implement and manage. Scalable for large teams with similar access needs. Supported by most IAM tools (e.g., Okta, AWS IAM).
    Cons: Less flexible for dynamic or complex access scenarios. Can lead to "role explosion" with too many roles.
    Example: A "Marketing" role grants access to social media tools but not financial systems.
    Fun Fact: RBAC is a staple in traditional enterprises for its straightforward approach!

    🔑 Attribute-Based Access Control (ABAC)
    What is it? Grants access based on attributes (e.g., user’s department, location, time, or device) using dynamic policies.
    How it works: Policies evaluate attributes in real time to decide access (e.g., "Allow access if user is in HR, in the UK, during work hours").
    Best for: Dynamic, complex environments like cloud-native apps or zero-trust architectures.
    Pros: Highly granular and flexible for nuanced access needs. Adapts to context (e.g., location, risk level). Ideal for modern IAM platforms like Ping Identity.
    Cons: More complex to set up and maintain. Requires robust policy management and attribute data.
    Example: An employee can access sensitive data only from a secure device in the office.
    Fun Fact: ABAC’s flexibility makes it a go-to for zero-trust security models!

    ⚖️ Key Differences:
    Approach: RBAC uses static roles; ABAC uses dynamic attributes.
    Flexibility: RBAC is simpler but rigid; ABAC is flexible but complex.
    Use Case: RBAC suits structured organizations; ABAC excels in dynamic, cloud, or high-security settings.
    Scalability: RBAC is easier for broad access; ABAC scales better for fine-grained control.

    💡 Why They Matter Together: RBAC offers simplicity for standard access, while ABAC provides precision for complex scenarios. Many IAM tools (e.g., SailPoint, Microsoft Entra ID) support both, letting you combine them for hybrid strategies. For example, use RBAC for employee apps and ABAC for sensitive data access.

    🔥 Pro Tip: Start with RBAC for quick wins, then layer ABAC for high-risk or dynamic use cases. Tools like Okta or Saviynt make this seamless!

    Which do you use—RBAC, ABAC, or both? Share your IAM insights or challenges below! 💬

    #Cybersecurity #IAM #RBAC #ABAC #Tech
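The contrast between the two models fits in a few lines of code. In this toy sketch (role names, attributes, and the policy rule are illustrative, not from any particular IAM product), RBAC is a static lookup while ABAC is a function of the request's context:

```python
# --- RBAC: permissions are attached to static roles ---
ROLE_PERMS = {
    "marketing": {"social_tools"},
    "finance": {"social_tools", "financial_systems"},
}

def rbac_allows(role: str, resource: str) -> bool:
    """Access is whatever the user's role grants, nothing more."""
    return resource in ROLE_PERMS.get(role, set())

# --- ABAC: a policy evaluates user + context attributes per request ---
def abac_allows(user: dict, resource: str, ctx: dict) -> bool:
    """Rule: allow hr_data if the user is in HR, in the UK, during
    work hours (the example policy quoted in the post above)."""
    return (resource == "hr_data"
            and user.get("dept") == "HR"
            and ctx.get("country") == "UK"
            and 9 <= ctx.get("hour", -1) < 17)

assert rbac_allows("marketing", "social_tools")
assert not rbac_allows("marketing", "financial_systems")
assert abac_allows({"dept": "HR"}, "hr_data", {"country": "UK", "hour": 10})
assert not abac_allows({"dept": "HR"}, "hr_data", {"country": "UK", "hour": 22})
```

Note how the same user can be allowed or denied under ABAC depending on time and location, which a static role table cannot express; that is the flexibility-versus-complexity trade-off in miniature.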

  • View profile for Confidence Staveley

    Multi-Award Winning Cybersecurity Leader | Author | Int’l Speaker | On a mission to simplify cybersecurity, attract more women, drive AI Security awareness and raise high-agency humans who defy odds & change the world.

    99,438 followers

    Using unverified container images, over-permissioning service accounts, postponing network policy implementation, skipping regular image scans and running everything in default namespaces… What do all these have in common? Bad cybersecurity practices!

    It’s best to always do this instead:

    1. Only use verified images, and scan them for vulnerabilities before deploying them in a Kubernetes cluster.
    2. Assign the least amount of privilege required. Use tools like Open Policy Agent (OPA) and Kubernetes' native RBAC policies to define and enforce strict access controls. Avoid using the cluster-admin role unless absolutely necessary.
    3. Implement network policies from the start to limit which pods can communicate with one another. This can prevent unauthorized access and reduce the impact of a potential breach.
    4. Automate regular image scanning using tools integrated into the CI/CD pipeline to ensure that images are always up-to-date and free of known vulnerabilities before being deployed.
    5. Always organize workloads into namespaces based on their function, environment (e.g., dev, staging, production), or team ownership. This helps in managing resources, applying security policies, and isolating workloads effectively.

    PS: If necessary, you can ask me in the comment section specific questions on why these bad practices are a problem.

    #cybersecurity #informationsecurity #softwareengineering
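Point 3 usually starts with a default-deny baseline. A minimal sketch that builds such a NetworkPolicy manifest as a Python dict, ready to serialize to JSON or YAML for kubectl (the namespace name is an illustrative placeholder):

```python
def default_deny_policy(namespace: str) -> dict:
    """Build a NetworkPolicy manifest that selects every pod in the
    namespace and declares both policy types with no allow rules,
    so all ingress and egress is denied until explicit policies
    are added on top of it."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            "podSelector": {},  # empty selector matches all pods
            "policyTypes": ["Ingress", "Egress"],
        },
    }

policy = default_deny_policy("staging")  # namespace is illustrative
```

With this in place, each permitted flow gets its own narrowly scoped allow policy, which is far easier to audit than retrofitting restrictions onto an open cluster.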

  • View profile for Vaughan Shanks

    Helping security teams respond to cyber incidents better and faster | CEO & Co-Founder, Cydarm Technologies

    12,061 followers

    Last week #NIST released three post-#quantum #encryption standards. Why is this significant? Put simply, from a practical standpoint: risk management and compliance.

    First, on risk management: experts now say that quantum computing is less than a decade away. Quantum computers are expected to be able to search large keyspaces very quickly, which means they will be able to break much of today's encryption. Moreover, it is entirely plausible that encrypted information recorded today is being stored for decryption once quantum computing becomes available ("harvest now, decrypt later"). If you apply quantum-resistant encryption to your data now, you reduce the risk of an adversary successfully exploiting it when they gain access to quantum computing.

    Second, on compliance: NIST sets information-processing standards for the US federal government, and many other nations adopt its encryption standards, as they do not have resources at the same scale as NIST. You can be certain that NIST-approved post-quantum algorithms will start appearing in compliance checklists, as is currently the case with algorithms such as AES-256 and SHA-256. Note that these algorithms have #FIPS numbers associated with them, meaning "Federal Information Processing Standard".

    Briefly, the approved algorithms are:
    🔒 ML-KEM, for key encapsulation (establishing shared keys), as FIPS 203
    🔒 ML-DSA, for digital signatures, as FIPS 204
    🔒 SLH-DSA, for stateless hash-based digital signatures, as FIPS 205

    A fourth algorithm, FN-DSA, also for digital signatures, is expected to be released in the next year.
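The "search large keyspaces quickly" point is usually framed via Grover's algorithm, which roughly halves a symmetric key's effective bit strength; asymmetric schemes like RSA and elliptic curves fare far worse under Shor's algorithm, which is why NIST standardized new KEM and signature algorithms rather than just bigger keys. A quick back-of-the-envelope (a deliberate simplification; real quantum attack costs are higher):

```python
def grover_effective_bits(key_bits: int) -> int:
    """Grover's algorithm searches a 2^k keyspace in roughly 2^(k/2)
    quantum operations, so a symmetric key's effective strength is
    about halved. This is a textbook simplification: it ignores the
    substantial circuit-depth and error-correction overheads."""
    return key_bits // 2

assert grover_effective_bits(128) == 64   # AES-128 becomes borderline
assert grover_effective_bits(256) == 128  # AES-256 stays comfortable
```

This is why guidance tends to pair the new FIPS 203–205 algorithms with AES-256 and SHA-384/SHA-512 rather than their 128-bit-strength counterparts.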

  • View profile for Kristof Kazmer

    Head of Solution Sales | ASE Tech | Uncompromised Solutions. Proven on Australia’s toughest stages | Cybersecurity | Managed Services | Data and Analytics

    8,769 followers

    🛠️ “If it ain’t broke, don’t fix it.” It’s a saying that works for a leaky tap or an old lawnmower… but not for cybersecurity.

    Imagine walking into this server room and trying to find a needle in a haystack, or a patch cable in a forest. Sure, it might be easier to run a new cable, but when you continually ignore the root cause, this is what can happen. The same goes for unpatched software, legacy servers, and unsupported firewalls: they might look fine on the surface, but under the hood they’re one zero-day away from disaster.

    The truth is:
    🔹 Cybercriminals love “if it ain’t broke” thinking.
    🔹 End-of-life tech is their easiest way in.
    🔹 And the cost of doing nothing? Often far more than the cost of upgrading.

    Let’s address common myths, with insights on ways to strengthen your cyber defences. ✅
    1. Basic #cybersecurity training isn’t enough: Focus on real-life examples and higher-level education to raise awareness.
    2. Zero-trust solutions are NOT all the same: Beware of vendors and their false promises (get references for your use cases).
    3. Cloud providers do not secure by default: Adding layers of security is a MUST in the cloud.
    4. Cybersecurity is everyone’s responsibility: Like driving a bus, you need to bring everyone on the journey; it’s not just IT.
    5. More tools aren’t always better: Streamlining your tech stack can reduce complexity.
    6. Strong passwords alone aren’t enough: Utilise multi-factor authentication (MFA) where possible.
    7. SMS-based MFA is vulnerable: Look for app- or biometric-based solutions.
    8. Advanced tools can cause gaps: The human factor requires training and well-implemented processes.
    9. Logins can still be compromised: Dynamic access control limits the blast radius.
    10. Physical cybersecurity is just as important as virtual: Secure both the data and the asset.
    11. It’s not “if”, it’s “when”: Being proactive mitigates risks but does not eliminate them; have a response plan.
    12. Quantum computers aren’t a universal decryption tool: Be prepared, though.
    13. Secure your SaaS apps: Expecting the provider to secure your services leaves you vulnerable; include these in your security profile.
    14. Humans make mistakes: By training your staff, you can turn them into a human firewall that secures your organisation.
    15. Stay alert and ever-present: Keep yourself updated on evolving threats.
    16. Assume you will be breached: Test your detection and response capabilities.
    17. Obscurity doesn’t equal security: Robust measures are key, regardless of size.
    18. Don’t rely on vendors for compliance: Take responsibility for your data.
    19. Cybersecurity is an investment, not a burden: It protects your reputation and finances.

    This #Cybersecurity Awareness Month, challenge the old mindset.
    ✅ Audit your legacy tech.
    ✅ Patch and replace what’s past its prime.
    ✅ Segment, monitor, and protect what can’t yet be retired.

    Need help? Reach out to the team at ASE Tech.

    #ShitHappens #ThinkBeforeYouCluck
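On point 7 (preferring app-based MFA over SMS): authenticator apps implement TOTP, RFC 6238, which needs only hashing and a clock, nothing SMS can offer. A compact sketch using only the Python standard library, checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, then
    'dynamic truncation' down to the requested number of digits."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP where the counter is the current
    30-second time window, so codes expire on their own."""
    t = int(time.time()) if at is None else at
    return hotp(key, t // step, digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, T = 59s):
assert totp(b"12345678901234567890", at=59, digits=8) == "94287082"
```

Because the shared secret never leaves the device and codes are derived locally, TOTP is immune to the SIM-swap and SS7 interception attacks that make SMS codes risky.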

  • View profile for Jason Rebholz

    Securing the agentic workforce | Co-founder & CEO at Evoke Security | Former CISO & IR leader

    32,132 followers

    Even if your company isn’t building AI tools, one of your SaaS providers is. This introduces a brand new attack surface you didn’t sign up for. Here are five steps to manage your new risk:

    𝗦𝘁𝗲𝗽 𝟭: 𝗜𝗱𝗲𝗻𝘁𝗶𝗳𝘆 𝗔𝗜 𝗨𝘀𝗮𝗴𝗲: I’ll spare you the adage of “you can’t protect what you can’t see.” It’s overplayed… but it’s also really important. You need to monitor both the known knowns, i.e., the third-party SaaS solutions that have already undergone your third-party risk management review, and the unknown unknowns, i.e., your shadow AI. You know your users are signing up for AI tools and connecting them to your company data. What you don’t know is which tools they are.

    𝗦𝘁𝗲𝗽 𝟮: 𝗘𝘀𝘁𝗮𝗯𝗹𝗶𝘀𝗵 𝗮𝗻 𝗔𝗜 𝗥𝗲𝘃𝗶𝗲𝘄 𝗣𝗿𝗼𝗰𝗲𝘀𝘀: If you have a third-party risk management process, great, you’re already halfway there. But you need to update it to include questions around AI. For example: what types of models is the third-party provider using? How are they securing their AI implementations? What risk/security assessments have they done against their AI implementation? How are they monitoring for malicious activity? Also, be sure to classify these SaaS apps based on the data and tools you feed them or that they have access to. Assume that something bad can come from the SaaS tool and think about what it has access to. You’ll get a pretty good sense of the risk from there.

    𝗦𝘁𝗲𝗽 𝟯: 𝗦𝗲𝘁 𝗔𝗜-𝗨𝘀𝗮𝗴𝗲 𝗣𝗼𝗹𝗶𝗰𝗶𝗲𝘀: If you don’t have an acceptable use policy, now is the time to create it. Establish the rules of the road for what AI use is allowed and how it should be used. At a minimum, this should require employees to submit tools through the AI review process. You should also ensure that employees have a clear understanding of the type of data that can be used with these tools. It’s a business decision that comes down to what the AI tool will have access to (e.g., data, tools, etc.) and the level of risk you’re willing to tolerate.

    𝗦𝘁𝗲𝗽 𝟰: 𝗠𝗼𝗻𝗶𝘁𝗼𝗿 𝗨𝘀𝗮𝗴𝗲: This is the blind spot for most organizations. After you complete the initial security review of a SaaS tool, you feel all warm and fuzzy that you’ve done the right things to validate its security. But guess what: security isn’t static. And like anyone trying to find a new partner, that third party probably embellished their security controls. For any high-risk third-party tools, keep tabs on new AI features they’re adding and how those could impact your security.

    𝗦𝘁𝗲𝗽 𝟱: 𝗘𝗱𝘂𝗰𝗮𝘁𝗲 𝗮𝗻𝗱 𝗘𝗻𝗮𝗯𝗹𝗲: When you find tools that help teams work more efficiently, share those wins with the company. This is an opportunity to share what’s working and ensure it’s also secure along the way.

    ------------------------------
    ✅ Follow me for the latest on the intersection of AI and security.
    👆 Subscribe to my newsletter with the link at the top of this post.
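Step 2's "classify these SaaS apps based on what they have access to" can start as a simple scoring rubric. A toy sketch (the access categories, weights, and tier thresholds are my own illustration, not the author's method):

```python
# Weights for what a SaaS/AI tool can touch; values and thresholds are
# arbitrary illustrations of an access-based classification rubric.
ACCESS_WEIGHTS = {
    "public_docs": 1,
    "internal_docs": 3,
    "source_code": 5,
    "customer_pii": 8,
    "prod_credentials": 10,
}

def risk_tier(accesses: list[str]) -> str:
    """Map a tool's data/tool access to a coarse review tier."""
    score = sum(ACCESS_WEIGHTS.get(a, 0) for a in accesses)
    if score >= 10:
        return "high"    # full review + ongoing monitoring (step 4)
    if score >= 4:
        return "medium"  # standard third-party review
    return "low"         # lightweight approval

assert risk_tier(["public_docs"]) == "low"
assert risk_tier(["internal_docs", "source_code"]) == "medium"
assert risk_tier(["customer_pii", "prod_credentials"]) == "high"
```

Even a crude rubric like this makes the step-4 monitoring tractable: you only need to track feature changes closely for the "high" tier.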

  • View profile for Zinet Kemal, M.S.c

    Protecting kids online • Senior Cloud Security Engineer • TEDx Speaker • Multi-award winning cybersecurity practitioner • Author • Instructor • AIGP | CCSK | CISA | SecAI+

    36,555 followers

    2024 State of Cloud Security Study: Key Insights

    A great morning read from Datadog, which "analyzed security posture data from a sample of thousands of organizations that use AWS, Azure, or Google Cloud."

    ↗️ Long-lived credentials remain a security risk, with 60% of AWS IAM users having access keys older than one year. Unused credentials are widespread, increasing attack surfaces across all cloud providers (AWS, Azure, GCP).
    Recommendation -> Shift to temporary, time-bound credentials & centralized identity management solutions.

    ↗️ Public access blocks on cloud storage are increasing. AWS S3 & Azure Blob Storage are increasingly using public access blocks, with S3 seeing 79% of buckets proactively secured.
    Recommendation -> Enable account-level public access blocks to minimize risks of accidental data exposure.

    ↗️ IMDSv2 adoption is growing. AWS EC2 instances enforcing IMDSv2 have grown from 25% to 47%, yet many instances remain vulnerable.
    Recommendation -> Enforce IMDSv2 across all EC2 instances & use regional settings for secure defaults.

    ↗️ Managed Kubernetes clusters: many clusters (almost 50% on AWS) expose APIs publicly, with insecure default configurations risking attacks.
    Recommendation -> Use private networks, enforce audit logs, & limit permissions on Kubernetes worker nodes.

    ↗️ Third-party integrations pose supply chain risk: 10% of third-party IAM roles are overprivileged, creating risks of AWS account takeover.
    Recommendation -> Limit permissions, enforce External IDs, & remove unused third-party roles.

    ↗️ Most cloud incidents are caused by compromised cloud credentials, particularly in AWS, Azure, & Entra ID environments.
    Patterns of attack: compromised identities, escalation via GetFederationToken, service enumeration, reselling access, & persistence techniques.
    Microsoft 365 -> Credential stuffing, bypassing MFA, & malicious OAuth apps for email exfiltration.
    Google Cloud -> Attackers leverage VPNs & proxies for crypto mining and follow common attack patterns.
    Recommendations -> Implement strong identity controls & monitor API changes that attackers may exploit.

    ↗️ Many cloud workloads are excessively privileged or run in risky configurations, exposing organizations to significant risks, including full account compromise & data breaches.
    Recommendation -> Enforce least-privilege principles on all workloads. Use non-default service accounts with tailored permissions in Google Cloud. Avoid running production workloads in AWS Organization management accounts.

    The study shows improved adoption of secure cloud configurations -> better awareness + enforcement of secure defaults. However, risky credentials & common misconfigurations in cloud infrastructure remain significant entry points for attackers.

    P.S. Use the info to strengthen your org's cloud security posture. Full study report in the comments ⬇️

    #cloudsecurity #cloudsec #cybersecurity
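The long-lived-credentials finding is easy to automate a check for. A minimal sketch (the field names are illustrative of what an IAM credential report exposes; a real scan would pull them from the provider's API):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=365)  # the study's one-year threshold

def stale_keys(keys: list[dict], now: datetime) -> list[str]:
    """Return the IDs of access keys created more than a year ago.
    Each entry is a dict like {"id": str, "created": datetime}."""
    return [k["id"] for k in keys if now - k["created"] > MAX_KEY_AGE]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
keys = [
    {"id": "AKIAOLD", "created": datetime(2022, 1, 1, tzinfo=timezone.utc)},
    {"id": "AKIANEW", "created": datetime(2024, 5, 1, tzinfo=timezone.utc)},
]
print(stale_keys(keys, now))  # ['AKIAOLD']
```

Running a check like this on a schedule, and rotating or deleting what it flags, is the stopgap while you migrate to the temporary, time-bound credentials the study recommends.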

  • View profile for Tony Scott

    CEO Intrusion | ex-CIO VMWare, Microsoft, Disney, US Gov | I talk about Network Security

    13,653 followers

    After 40 years in tech leadership, I've noticed a costly blind spot in public cloud adoption. Many organizations initially treat public cloud resources like an endless all-you-can-eat buffet, and then are shocked when the bill comes.

    Here's what typically happens: in an owned or co-located data center, adding another VM or application to existing infrastructure feels free. It's like having a five-bedroom house with only two occupied rooms; adding a third resident doesn't increase your housing costs. But in the public cloud, you pay for every resource. Teams spin up new instances for temporary projects, create backup copies, or build staging environments, then forget (or neglect) to clean up when the need passes. Often, many of these environments are not managed by the formal IT organization, or may be hidden as part of outsourced capabilities.

    This isn't just a cost issue: these forgotten or neglected environments can pose serious security risks. While production environments usually get rigorous security controls, these temporary spaces often contain sensitive data with minimal protection. They become perfect targets for nation-state actors, cyber criminals, and sophisticated threat actors.

    I've seen this pattern when I was CIO at Microsoft, CIO at The Walt Disney Company, CIO at VMware, and across federal agencies when I was the CIO for the U.S. Federal Government. The convenience of instant provisioning makes it easy to accumulate forgotten resources that drain budgets and create often-overlooked security risks.

    The solution isn't complex, but it requires discipline: track every resource, implement clear cleanup protocols and guidelines, make sure there is management accountability, and treat cloud environments with the appropriate level of cybersecurity protection.
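"Track every resource, implement clear cleanup protocols" can be enforced mechanically: require an expiry tag on everything and flag whatever is untagged or past its date. A toy sketch (the tag name and resource shape are illustrative, not a specific cloud API):

```python
from datetime import date

def cleanup_candidates(resources: list[dict], today: date) -> list[str]:
    """Flag resources with no 'expires' tag, or an expiry in the past.
    Each entry is a dict like {"id": str, "tags": {str: str}}."""
    flagged = []
    for r in resources:
        expires = r.get("tags", {}).get("expires")
        if expires is None or date.fromisoformat(expires) < today:
            flagged.append(r["id"])
    return flagged

resources = [
    {"id": "i-stale", "tags": {"expires": "2024-01-31"}},
    {"id": "i-untagged", "tags": {}},
    {"id": "i-active", "tags": {"expires": "2024-12-31"}},
]
print(cleanup_candidates(resources, date(2024, 6, 1)))
```

Treating "no expiry tag" as a violation is the key design choice: it surfaces exactly the shadow environments described above, instead of only the ones someone remembered to label.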
