Organizations aiming to strengthen their cybersecurity posture often start with penetration testing, a well-defined process that identifies vulnerabilities in systems and networks. However, as threats evolve, many are transitioning to red teaming — a more comprehensive and adversary-focused approach that tests not just technology, but also people and processes.

While pentesting is usually more tightly scoped and time-boxed, red teaming simulates real-world attacks with minimal prior knowledge, operating stealthily to test detection and response capabilities. This transition is valuable, but it is fraught with pitfalls that can diminish the value of red team engagements if not carefully navigated, especially in assumed breach scenarios, where additional setup is required.

Below are the five most common pitfalls teams encounter when moving from pentesting to red teaming—and how to avoid them.

1. Poor Naming Conventions

One of the most glaring red flags in a red team engagement is when assumed breach users are named something like redteam01, contain the security vendor’s name (e.g. CERTAINITY), or state the account’s purpose in their description or metadata. These choices immediately reveal the synthetic nature of the activity to blue teams, removing the element of surprise and realism.

Best Practice: Use realistic, innocuous naming conventions that blend in with the target environment. Clone the naming schemes used by HR-created accounts or service users. Metadata hygiene is equally crucial: a SOC should not be able to spot the assumed breach user as an anomaly in the environment.
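As a minimal sketch of this idea, the following Python snippet derives account attributes from the naming scheme observed in the target directory. The first.last convention, the example domain, and all field values are illustrative assumptions; adapt them to whatever the environment actually uses.

```python
import unicodedata

def to_ascii(name: str) -> str:
    """Fold accented characters so the login matches typical directory conventions."""
    return unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()

def build_account(first: str, last: str, department: str, title: str) -> dict:
    """Build attributes for an assumed breach user that mirrors HR-created accounts."""
    sam = f"{to_ascii(first).lower()}.{to_ascii(last).lower()}"
    return {
        "sAMAccountName": sam,
        "displayName": f"{first} {last}",
        "mail": f"{sam}@example.com",  # use the real corporate mail domain
        "department": department,
        "title": title,
        "description": title,          # no hint of "red team" anywhere
    }

# A persona that blends in with ordinary HR-provisioned users
print(build_account("Julia", "Meier", "Finance", "Accounts Payable Clerk"))
```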

2. Unrealistic Access

A common misstep when moving from pentesting to red teaming is handing the red team a brand-new assumed breach account with no historical footprint and almost no permissions: a clean slate that no real attacker would ever inherit.
Real adversaries thrive on existing email conversations, SharePoint links, chat histories, and shared drives to plan lateral movement and craft believable phishing. A fresh account offers none of that. Without mailbox access, group memberships, or SaaS licenses, the operator must first spend valuable time acquiring the very basics that everyday employees already possess, skewing the test toward noise and artificial hurdles.

Best Practice: Base assumed breach accounts on actual user personas and business roles.
Include realistic permissions like mail access, M365 licenses, VPN capabilities, and internal system roles. This ensures that follow-on actions (e.g., phishing, lateral movement) are meaningful and mirror real-world threat actor behavior.
One effective approach, especially if you perform engagements periodically, is to randomly sample an existing user and clone their permission set. In line with the previous point, also ensure that the metadata of the assumed breach user fits the copied role and permissions.
By mirroring real-world entitlements instead of issuing blank-slate accounts, you give the red team an authentic launching pad and the blue team a fair, high-fidelity test of their detection and response capabilities.
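A minimal sketch of that sampling step, assuming you can export users together with their group memberships and license assignments (the field names and sample data below are illustrative, not a standard schema):

```python
import random

# Illustrative directory export; in practice this would come from your IAM tooling.
users = [
    {"name": "a.huber", "department": "Finance", "title": "Controller",
     "groups": ["fin-users", "vpn-users"], "licenses": ["M365 E3"],
     "is_privileged": False},
    {"name": "b.wagner", "department": "IT", "title": "Domain Admin",
     "groups": ["domain-admins"], "licenses": ["M365 E5"],
     "is_privileged": True},
    {"name": "c.steiner", "department": "Sales", "title": "Account Manager",
     "groups": ["sales-users", "crm-users", "vpn-users"],
     "licenses": ["M365 E3"], "is_privileged": False},
]

def clone_permission_set(users: list) -> dict:
    """Sample a random non-privileged user and copy their entitlements
    onto the assumed breach persona."""
    # Exclude privileged accounts; agree the sampling rules with the white team.
    candidates = [u for u in users if not u["is_privileged"]]
    template = random.choice(candidates)
    return {
        "groups": template["groups"],          # security groups, DLs, VPN
        "licenses": template["licenses"],      # e.g. M365 license SKUs
        "department": template["department"],  # metadata must match (see pitfall 1)
        "title": template["title"],
    }

print(clone_permission_set(users))
```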

3. Overstuffed White Teams

In red team operations, the white team (those aware of the exercise and responsible for coordination) must remain lean. In small or mid-sized companies, an overpopulated white team often leads to information leakage or reduced realism, because it ends up overlapping almost entirely with the blue team.

Best Practice: Keep the white team as small as operationally feasible, ideally limited to a trusted few who can coordinate safely and discreetly.
If the IT staff responsible for creating user accounts cannot be separated from the IT security staff, consider including HR in the white team instead and initiate the account creation exactly as it would happen for an actual new employee. Reducing the number of people in the know preserves the exercise’s integrity and tests detection and response more effectively.

4. Neglecting Physical & Social Dimensions

Penetration tests are purely technical assessments of applications and networks.
Companies often carry this approach over to red team engagements, scoping in only technical exploitation steps. However, looking at real-world attacks shows that social engineering (e.g. phishing) is by far the most effective way of compromising companies.
Our experience has also shown that even customers with a very high maturity in securing their virtual presence are often vulnerable to physical intrusions. Limiting attack simulations to technical exploitation results in an ineffective hybrid of penetration testing and red teaming that misses the critical angles of physical presence and social engineering.

Best Practice: Periodically include physical and social vectors in your planning. Also consider using actual laptops rather than VMs in assumed breach scenarios for a more realistic simulation of a compromised endpoint.

5. Scoping Too Tightly

Especially without prior experience in red teaming, allowing broad “attacks” can be scary for companies. It’s tempting to restrict red team engagements to keep things “safe”. But by defining too narrow a mission, you may limit the red team’s ability to demonstrate impact, creativity, or lateral thinking: key traits of real-world attackers.
A particularly contradictory pattern emerges when organizations exclude entire subnets, environments, or cloud tenants from testing because they’re “afraid something might break”.
Ironically, these same systems — often legacy, sensitive, or poorly maintained — are exactly the ones most likely to harbor critical vulnerabilities.

Best Practice: Define objectives, not paths.
For example: “Access sensitive HR records,” rather than “Use user A to pivot into server B.” Broader scoping allows the red team to explore unexpected attack paths, providing more valuable insights into organizational resilience. While safety and availability need to be considered, e.g. for networks with IoT devices related to critical infrastructure or large machinery, try to keep the scope as broad as possible.
To limit potential impact, tell the red team which systems need to be handled with care and/or define time frames in which critical systems can be tested without disturbing operations.
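One way to capture such constraints is an objective-based scope definition. The sketch below is a hypothetical format (all systems, constraints, and windows are illustrative), shown as a simple Python structure:

```python
# Hypothetical, objective-based scope: what to achieve and which systems
# need care, instead of a fixed attack path.
engagement_scope = {
    "objectives": [
        "Access sensitive HR records",
        "Demonstrate access to production financial data",
    ],
    "out_of_scope": [],  # keep this as short as safety genuinely requires
    "handle_with_care": [
        {"system": "OT network (building automation)",
         "constraint": "passive enumeration only"},
        {"system": "legacy ERP cluster",
         "test_window": "Sat 02:00-06:00, white team on standby"},
    ],
}

for objective in engagement_scope["objectives"]:
    print(f"Objective: {objective}")
```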

Conclusion

The journey from penetration testing to red teaming represents a shift from checking boxes to challenging assumptions. But to truly benefit, organizations must avoid common missteps that can weaken the realism and value of engagements. By focusing on authenticity, minimal disclosure, realistic access, broader scoping, and occasionally touching the physical and social layers, red teamers can better simulate advanced threats—and help defenders sharpen their edge.