Why the Unity Input System Changed Everything I Knew About Player Controls
Here's the thing—I spent way too long fighting with Unity's old Input.GetKey() system before I realized I was doing it all wrong. You know that frustration when you finally get keyboard controls working perfectly, then someone asks "Hey, can I use my Xbox controller?" and you have to rewrite half your input code? Been there. That's exactly why Unity built the new Input System, and honestly, it's one of those changes that makes you wonder how you ever lived without it.
The new Unity Input System isn't just an update—it's a complete rethinking of how player controls should work in modern games. Instead of hardcoding "press Space to jump," you're now saying "the player wants to jump," and the system figures out whether that's Space, the A button on a gamepad, or even a tap on a touchscreen. Think of it like ordering coffee: the old system made you specify exactly which button on the espresso machine to press. The new system? You just say "I want an espresso," and it works whether you're at a café, using a home machine, or tapping an app on your phone.
What Makes the New Input System Different (And Why You Should Care)
The Unity Input System solves a problem every game developer hits eventually—making controls work across different devices without writing separate code for each one. The traditional Input class ties your logic directly to specific hardware. When you write Input.GetKey(KeyCode.Space), you're literally saying "check if the Space key is pressed." That's fine until you need gamepad support, or touch controls, or want players to rebind keys.
The new system abstracts the player's intent from the physical button. You define Actions like "Jump" or "Fire," and those Actions can be triggered by any input you bind to them. The system handles all the device-specific stuff behind the scenes, freeing you to focus on gameplay logic.
This is crucial for students building their first real projects. You're not just learning better syntax—you're learning a professional workflow that'll save you countless hours when you inevitably need to add Unity gamepad keyboard support or implement Unity input rebinding for your players.
The Building Blocks: Understanding Unity Input Action Asset and Core Components
Before we dive into code, let me break down the terminology. When I first opened the Input System package at CMU, these terms felt like alphabet soup, but they're actually pretty logical once you see how they fit together.
Input Action Asset: This is your central hub—a special .inputactions file that stores all your input configuration. Think of it as a database for everything related to player controls. It keeps your input logic separate from your game code, which is exactly what you want for clean architecture.
Action Map: A named collection of Actions that belong together. I always create separate Action Maps for different game states. For example, a "Player" map for movement and combat, and a "UI" map for menu navigation. The beauty here is you can enable or disable entire maps at once—perfect for switching between gameplay and pause menus.
Action: This represents what the player is trying to do—"Move," "Jump," "Interact." Actions are completely abstract. They don't care if you're using WASD or a joystick; they just care about receiving the right type of input data (a button press, a 2D vector, etc.).
Binding: The actual connection between an Action and a physical control. Your "Jump" Action might have one Binding to the Space key and another to the South Button on a gamepad. This is where the magic of device abstraction happens.
Control Scheme: A defined set of device requirements. You can create a "Keyboard&Mouse" scheme and a separate "Gamepad" scheme, and the system automatically switches based on what the player is using. This is essential for implementing robust Unity control schemes in your games.
PlayerInput Component: The MonoBehaviour that bridges your Unity Input Action Asset and your GameObjects. It handles device assignments and routes input events to your scripts, simplifying a lot of the heavy lifting.
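To see how these pieces fit together, here's a minimal code-only sketch. This is purely illustrative (the rest of this post uses the asset workflow instead): one abstract "Jump" Action with a keyboard Binding and a gamepad Binding, created directly in code.

```csharp
// Illustrative only: one Action, two Bindings, created in code.
// Requires the Input System package (com.unity.inputsystem).
using UnityEngine;
using UnityEngine.InputSystem;

public class JumpInputExample : MonoBehaviour
{
    private InputAction jumpAction;

    void Awake()
    {
        // One abstract "Jump" intent...
        jumpAction = new InputAction("Jump", InputActionType.Button,
                                     binding: "<Keyboard>/space");
        // ...with a second physical Binding for gamepads
        jumpAction.AddBinding("<Gamepad>/buttonSouth");
        jumpAction.performed += ctx => Debug.Log("Jump!");
    }

    void OnEnable()  => jumpAction.Enable();
    void OnDisable() => jumpAction.Disable();
}
```

Notice the gameplay callback never mentions a device: adding touch or joystick support later is just another AddBinding call.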
How the Event-Driven Magic Actually Works
This one took me a bit to wrap my head around because it's fundamentally different from the polling approach you might be used to. Instead of checking button states every frame in Update(), the new Unity Input System is event-driven. You subscribe to events that fire when something happens with your Actions.
InputAction is the C# class representing an Action from your asset. Here's how you get a reference:
// Assuming 'playerControls' is an instance of your generated C# class
// from the Input Action Asset.
InputAction moveAction = playerControls.Player.Move;
Source: Unity Docs, InputAction.
Action Callbacks: Every Action has three main events—started, performed, and canceled. These are your Unity input callbacks. The performed event is the one you'll use most often: it fires when the interaction completes. For a default button Action, that's the moment the press crosses the press threshold; interactions like Hold or Tap change exactly when it fires.
Here's how you subscribe to an event:
// Subscribe a function named 'OnJump' to the 'performed' event.
// The '+=' operator is used to add a listener to the event.
jumpAction.performed += OnJump;
// The function signature must accept an InputAction.CallbackContext parameter.
private void OnJump(InputAction.CallbackContext context)
{
Debug.Log("Jump action was performed!");
}
Source: Unity Docs, Responding to Actions.
Reading Values: The InputAction.CallbackContext object passed to your functions contains all the info about the input event. You use ReadValue<T>() to extract the data in the correct format:
private void OnMove(InputAction.CallbackContext context)
{
// Read the Vector2 value from the 'Move' action (e.g., from WASD or a joystick).
Vector2 moveInput = context.ReadValue<Vector2>();
Debug.Log($"Move input: {moveInput}");
}
Source: Unity Docs, InputAction.CallbackContext.
Picking Your Poison: Unity Events vs C# Events
The Unity PlayerInput component gives you multiple ways to wire up your Actions to your code. After working on several projects, I've learned when to use each approach. Here's what actually matters:
| Criteria | Approach A: Invoke Unity Events | Approach B: Invoke C# Events |
|---|---|---|
| Best For | Beginners, rapid prototyping, and simple scenarios where visual setup in the Inspector is preferred over writing boilerplate code. | More complex projects, professional workflows, and situations where you need maximum performance and type safety, as it avoids string-based lookups. |
| Performance | Slightly slower due to the overhead of Unity's event system, which uses string comparisons and reflection to find and call the methods. | The most performant method, as it uses direct C# delegate callbacks, which are much faster than Unity Events. |
| Complexity | Very low complexity. You simply create a public method and link it to the event in the Inspector, with no manual subscription code required. | Higher initial complexity. It requires generating a C# class from the asset, implementing an interface, and setting up callbacks in code (OnEnable/OnDisable). |
| Code Example | Create a public OnJump(InputAction.CallbackContext ctx) method, then link it to the Action's event in the Inspector; no subscription code is needed. | Instantiate the generated class, then subscribe in code: playerControls.Player.Jump.performed += OnJump; |
For your first few projects, I'd recommend starting with Unity Events to understand the flow. Once you're comfortable, switch to the C# Events approach for better performance and maintainability.
Why This Matters for Your Game Projects
Let me be honest—adopting the Unity Input System adds a bit of initial complexity. But the payoff is massive, especially when you're trying to build something that feels polished and professional.
- Complete Device Abstraction: Write your gameplay logic once, and it works with any device you add bindings for—keyboards, mice, gamepads, joysticks, you name it. No more duplicate code for each input type.
- Effortless Player Rebinding: The system is designed from the ground up to support Unity input rebinding at runtime. Allowing players to remap controls is expected in modern PC games, and with this system, you can implement it with just a few lines of code.
- Simplified Local Multiplayer: The PlayerInputManager component makes Unity local multiplayer input incredibly straightforward. It automatically handles spawning players and assigning unique devices as they join—perfect for couch co-op games.
- Cleaner, Event-Driven Code: By moving away from polling in Update(), your code becomes more organized and efficient. Logic only runs when input actually occurs, rather than checking every single frame.
- Context-Sensitive Controls: Unity action maps let you easily enable and disable entire sets of controls. Switching between character movement, driving a vehicle, navigating menus, or watching a cutscene becomes trivial.
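The rebinding point above is worth a concrete sketch. This is a minimal, hedged example built on PerformInteractiveRebinding from the Input System's rebinding extensions; the RebindExample class, its InputActionReference field, and the StartRebind method are placeholders you would hook up to your own settings UI.

```csharp
// Minimal runtime rebinding sketch; class and field names are placeholders.
using UnityEngine;
using UnityEngine.InputSystem;

public class RebindExample : MonoBehaviour
{
    public InputActionReference jumpAction; // assign in the Inspector

    // Call this from a "Rebind Jump" button, then press the new control
    public void StartRebind()
    {
        var action = jumpAction.action;
        action.Disable(); // an action must be disabled while rebinding

        action.PerformInteractiveRebinding()
              .WithControlsExcluding("<Mouse>/position") // ignore pointer noise
              .OnComplete(op =>
              {
                  op.Dispose();    // free the rebind operation's resources
                  action.Enable();
                  // Log the first binding's new effective path
                  Debug.Log($"Rebound to: {action.bindings[0].effectivePath}");
              })
              .Start();
    }
}
```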
Setting Up Your First Custom Input System
Here's my go-to workflow for setting up Unity custom input in any new project. I've done this dozens of times, and this is the exact method I use:
Generate a C# Class: In the Inspector for your .inputactions asset, tick the "Generate C# Class" option and apply. This is non-negotiable for me. It creates a type-safe wrapper that gives you IntelliSense-powered access to your maps and actions, preventing typos and errors.
// Instead of using strings, you can now do this:
private PlayerControls playerControls;
void Awake() {
playerControls = new PlayerControls();
playerControls.Player.Jump.performed += OnJump;
}
Source: Unity Docs, Action Assets.
Use Interfaces for Organization: When you generate the C# class, you can also generate an interface for each Action Map (like IPlayerActions). Implementing this interface forces you to create methods for every Action, ensuring you never forget to handle one. Trust me, this has saved me from countless bugs:
using UnityEngine;
using UnityEngine.InputSystem;
// Your class implements the generated IPlayerActions interface
public class PlayerController : MonoBehaviour, PlayerControls.IPlayerActions
{
private PlayerControls playerControls;
void OnEnable() {
if (playerControls == null) {
playerControls = new PlayerControls();
playerControls.Player.SetCallbacks(this); // Automatically links all interface methods
}
playerControls.Player.Enable();
}
// This method is required by the interface
public void OnJump(InputAction.CallbackContext context) {
// Jump logic here
}
// ... other required methods
}
Always Enable and Disable Your Actions: Actions and Unity action maps are disabled by default. You must enable them to receive input and disable them when they're no longer needed. The OnEnable() and OnDisable() methods are perfect for this:
void OnEnable() {
playerControls.Player.Enable();
}
void OnDisable() {
playerControls.Player.Disable();
}
Real Games That Nailed Custom Input Systems
I've spent a lot of time analyzing how professional studios implement input systems, and these examples are some of my favorites. Let me show you what makes them brilliant:
Hades
I've seen this technique used brilliantly in Hades. The combat feels equally fluid whether you're using a gamepad or keyboard/mouse, and that's not an accident.
The Mechanic: Fast-paced, responsive combat that seamlessly supports both input methods.
The Implementation: The game uses an abstract "Dash" Action. This Action has a Binding to the Space key for the "Keyboard&Mouse" Unity control schemes and another Binding to the South Button (A/X) for the "Gamepad" scheme. The core dash logic is written once and doesn't care which device triggered the Action.
The Player Experience: Players can switch between controller and keyboard mid-combat without interruption. The UI prompts update instantly to show the correct button for the active device. Here's the core logic—notice how it's completely device-agnostic:
// The core logic is device-agnostic
public void OnDash(InputAction.CallbackContext context) {
if (context.performed) {
// Execute dash logic
}
}
Microsoft Flight Simulator
One of my favorite implementations of this is in Microsoft Flight Simulator, especially for how it handles analog controls.
The Mechanic: Complex aircraft control using joysticks (pitch, roll, yaw), throttle quadrants, and rudder pedals.
The Implementation: The system uses "Value" type Actions for all analog controls. The "Pitch" Action, for example, is bound to the Y-axis of a flight stick. Processors are then used on these bindings to add dead zones and response curves, letting players fine-tune the sensitivity and feel.
The Player Experience: This provides an incredibly deep simulation. Players with complex hardware setups can map every real-world control to an in-game Action, while players with a simple gamepad can still access important functions through a simplified scheme.
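To make the processor idea concrete, here's an illustrative sketch (not the game's actual code) of a "Pitch" Value Action with a dead zone and inversion applied through the Input System's built-in processor strings ("axisDeadzone" and "invert" ship with the package).

```csharp
// Illustrative sketch: an analog Value Action shaped by processors.
using UnityEngine;
using UnityEngine.InputSystem;

public class PitchInputExample : MonoBehaviour
{
    private InputAction pitchAction;

    void Awake()
    {
        // The processors string trims small stick noise and flips the axis
        pitchAction = new InputAction("Pitch", InputActionType.Value,
            binding: "<Gamepad>/leftStick/y",
            processors: "axisDeadzone(min=0.15),invert");
    }

    void OnEnable()  => pitchAction.Enable();
    void OnDisable() => pitchAction.Disable();

    void Update()
    {
        float pitch = pitchAction.ReadValue<float>();
        // Apply pitch to the aircraft's control surfaces here
    }
}
```

In practice you'd usually add these processors on the Binding in the Input Action Asset's Inspector rather than in code, but the effect is the same.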
It Takes Two
After analyzing dozens of games, this one stands out because of its frictionless local co-op implementation.
The Mechanic: Local co-op where two players use different controller types simultaneously (one on keyboard, one on gamepad).
The Implementation: The PlayerInputManager component manages player joining. When new input is detected from an unassigned device, the manager automatically instantiates a new player prefab and pairs that device to the new player's Unity PlayerInput component. This is the gold standard for Unity local multiplayer input.
The Player Experience: A second player can join by simply pressing a button on their controller—no menus, no setup, just instant drop-in co-op.
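A minimal sketch of that drop-in behavior, assuming a PlayerInputManager configured with a player prefab and "Join Players When Button Is Pressed"; the JoinLogger class name is hypothetical.

```csharp
// Hedged sketch: logging drop-in joins via PlayerInputManager.
using UnityEngine;
using UnityEngine.InputSystem;

[RequireComponent(typeof(PlayerInputManager))]
public class JoinLogger : MonoBehaviour
{
    void OnEnable()
    {
        GetComponent<PlayerInputManager>().onPlayerJoined += OnPlayerJoined;
    }

    void OnDisable()
    {
        GetComponent<PlayerInputManager>().onPlayerJoined -= OnPlayerJoined;
    }

    private void OnPlayerJoined(PlayerInput player)
    {
        // Each joined player gets an exclusive device pairing
        Debug.Log($"Player {player.playerIndex} joined using {player.devices[0].displayName}");
    }
}
```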
Building Your First Player Controller Step-by-Step
Let me walk you through creating a complete 2D/3D player controller. This is the exact approach I use when starting a new Unity project, and I've refined it over multiple implementations.
Scenario Goal: Control character movement and jumping using a Vector2 for direction and a Button for jump, compatible with both keyboard and gamepad.
Unity Editor Setup:
- Create an Input Action Asset named "PlayerControls".
- Create an Action Map named "Player".
- Inside "Player", create a "Move" Action (Action Type: Value, Control Type: Vector2).
  - Add a Binding: WASD [Keyboard] (this is a pre-built composite).
  - Add another Binding: Left Stick [Gamepad].
- Create a "Jump" Action (Action Type: Button).
  - Add a Binding: Space [Keyboard].
  - Add another Binding: South Button [Gamepad].
- In the Inspector for the asset, enable "Generate C# Class" and click Apply.
- Create a 3D Cube (or 2D Sprite) for your player, add a Rigidbody (or Rigidbody2D), and attach the Unity PlayerInput component.
- Drag your "PlayerControls" asset into the Actions slot of the PlayerInput component.
- Attach the script below to your player object.
Step-by-Step Code Implementation:
Script Setup: Here's how I approach this—define movement speed, jump force, and store references to the Rigidbody and input vector:
// 3D Version
using UnityEngine;
using UnityEngine.InputSystem;
[RequireComponent(typeof(Rigidbody))]
public class PlayerController3D : MonoBehaviour
{
public float moveSpeed = 5f;
public float jumpForce = 5f;
private Rigidbody rb;
private Vector2 moveInput;
void Awake()
{
rb = GetComponent<Rigidbody>();
}
}
Receiving Input: Set the Unity PlayerInput component's Behavior to "Invoke Unity Events", then link the methods below to the "Move" and "Jump" Action events in the Inspector. These methods update our moveInput variable and trigger jump logic:
// 3D Version
// This method is called by the 'Move' Action event in the PlayerInput component
public void OnMove(InputAction.CallbackContext context)
{
moveInput = context.ReadValue<Vector2>();
}
// This method is called by the 'Jump' Action event
public void OnJump(InputAction.CallbackContext context)
{
// We only want to jump when the button is first pressed
if (context.performed)
{
rb.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
}
}
Applying Movement: In FixedUpdate (the correct place for physics calculations), we use the moveInput data to apply force to our Rigidbody:
// 3D Version
void FixedUpdate()
{
// Map the 2D input's y onto the 3D z-axis to move along the ground plane
Vector3 movement = new Vector3(moveInput.x, 0f, moveInput.y);
rb.velocity = new Vector3(movement.x * moveSpeed, rb.velocity.y, movement.z * moveSpeed);
}
For 2D games: The logic is nearly identical, but you use Rigidbody2D and the 2D physics types (Vector2, ForceMode2D) instead:
// 2D Version
// [RequireComponent(typeof(Rigidbody2D))]
// private Rigidbody2D rb;
// ...
// public void OnJump(InputAction.CallbackContext context) {
// if (context.performed) {
// rb.AddForce(Vector2.up * jumpForce, ForceMode2D.Impulse);
// }
// }
// void FixedUpdate() {
// rb.velocity = new Vector2(moveInput.x * moveSpeed, rb.velocity.y);
// }
Source: Unity Docs, PlayerInput.
Adding Mouse Look to Your First-Person Game
Let's tackle first-person camera controls next. I always tell students to look at how classic FPS games handle this, and here's my go-to implementation:
Scenario Goal: Control camera rotation in first-person style using mouse movement.
Unity Editor Setup:
- Use the same "PlayerControls" asset from the previous blueprint.
- In the "Player" Action Map, add a "Look" Action (Action Type: Value, Control Type: Vector2).
- Add a Binding: Delta [Mouse].
- Parent your main camera to your player object and attach the script below to the main camera.
- Attach the Unity PlayerInput component to the parent player object.
Step-by-Step Code Implementation:
Script Setup: These are the exact settings I use—variables for sensitivity, current rotation, and cursor locking:
using UnityEngine;
using UnityEngine.InputSystem;
public class MouseLook : MonoBehaviour
{
public float mouseSensitivity = 100f;
public Transform playerBody; // Reference to the player's body transform
private float xRotation = 0f;
private Vector2 lookInput;
void Start()
{
Cursor.lockState = CursorLockMode.Locked;
}
}
Receiving Input: Create a public method OnLook to be called by the PlayerInput component's "Look" event:
public void OnLook(InputAction.CallbackContext context)
{
lookInput = context.ReadValue<Vector2>();
}
Applying Rotation: In Update, calculate rotation based on mouse input, sensitivity, and Time.deltaTime. We rotate the player body left/right (Y-axis) and the camera up/down (X-axis), clamping vertical rotation to prevent flipping:
void Update()
{
float mouseX = lookInput.x * mouseSensitivity * Time.deltaTime;
float mouseY = lookInput.y * mouseSensitivity * Time.deltaTime;
// Calculate vertical rotation and clamp it
xRotation -= mouseY;
xRotation = Mathf.Clamp(xRotation, -90f, 90f);
// Apply rotation to the camera (up/down)
transform.localRotation = Quaternion.Euler(xRotation, 0f, 0f);
// Apply rotation to the player body (left/right)
playerBody.Rotate(Vector3.up * mouseX);
}
Source: Unity Docs, Responding to Actions.
Switching Between Player and UI Controls Like a Pro
Here's a scenario that trips up a lot of beginners—handling pause menus properly. You need to disable player movement while the menu is open, and Unity action maps make this incredibly straightforward.
Scenario Goal: Switch between controlling a player and controlling a pause menu, ensuring player movement is disabled while the menu is open.
Unity Editor Setup:
- In your "PlayerControls" asset, create a second Action Map named "UI".
- Inside "UI", create a "Pause" Action and bind it to the Escape key.
- Inside "UI", create a "Submit" Action and bind it to the Enter key.
- In the "Player" Action Map, also create a "Pause" Action bound to Escape. Switching maps disables every other map, so the action must exist in both maps for Escape to work during gameplay and while paused.
- On your Unity PlayerInput component, set the "Default Action Map" to "Player".
- Create a simple UI Panel for your pause menu and disable it by default.
- Attach the script below to a manager object in your scene (like a "GameManager").
Step-by-Step Code Implementation:
Script Setup: I've configured this dozens of times, and here's my tried-and-tested approach: references to the pause menu GameObject and the Unity PlayerInput component. One caution—work through the PlayerInput component's own actions rather than instantiating the generated PlayerControls class here, because that would create a second, independent copy of the asset with its own enabled state:
using UnityEngine;
using UnityEngine.InputSystem;
public class PauseManager : MonoBehaviour
{
    public GameObject pauseMenuUI;
    public PlayerInput playerInput; // Drag the player's PlayerInput component here
    private bool isPaused = false;
}
Subscribing to Pause Action: Because SwitchCurrentActionMap disables every other map, we subscribe to the "Pause" action in both the Player and UI maps (Escape is bound in each), using full "Map/Action" paths to disambiguate the shared name. Either one toggles the menu:
// (Inside PauseManager script)
void Start()
{
    // Player/Pause opens the menu during gameplay; UI/Pause closes it while paused
    playerInput.actions["Player/Pause"].performed += context => TogglePause();
    playerInput.actions["UI/Pause"].performed += context => TogglePause();
}
Toggle Pause Logic: This is the core of the implementation. When called, it checks the current pause state, activates/deactivates the menu UI, and—most importantly—switches the active Action Map:
// (Inside PauseManager script)
void TogglePause()
{
isPaused = !isPaused;
if (isPaused)
{
pauseMenuUI.SetActive(true);
Time.timeScale = 0f; // Pause the game
playerInput.SwitchCurrentActionMap("UI");
Debug.Log("Switched to UI map");
}
else
{
pauseMenuUI.SetActive(false);
Time.timeScale = 1f; // Resume the game
playerInput.SwitchCurrentActionMap("Player");
Debug.Log("Switched to Player map");
}
}
Source: Unity Docs, Switching Action Maps.
Ready to Start Building Your First Game?
If you've followed along this far, you've got the foundation you need to implement professional-quality input systems in your Unity projects. But here's the thing—reading about game development only gets you so far. The real learning happens when you're building actual games, hitting real problems, and figuring out solutions.
That's exactly why I created courses at Outscal that take you from basic concepts to building complete, polished game experiences. We don't just teach theory—we walk you through creating real games, handling edge cases, and implementing the kind of professional workflows that studios actually use.
Ready to go from learning Unity to actually shipping games? Check out Mr. Blocks - Your First Unity Game and start building today. You'll go from zero to a complete, playable game, learning not just the Unity Input System, but all the other essential pieces that make a game feel professional.
Key Takeaways
Here's what you need to remember about implementing the Unity Input System:
- The Unity Input System abstracts player intent from physical hardware, letting you write gameplay logic once that works across keyboards, gamepads, and other devices without code changes.
- Unity Input Action Asset files centralize all your input configuration, keeping it separate from game code for cleaner architecture and easier maintenance.
- Use Unity action maps to organize controls by context (Player, UI, Vehicle) and switch between them to enable/disable entire control sets based on game state.
- The system is event-driven with Unity input callbacks (started, performed, canceled) instead of polling, making your code more efficient and organized.
- Always generate a C# class from your Input Action Asset for type-safe, IntelliSense-powered access to your Actions, preventing typos and runtime errors.
- Implement Unity control schemes to define device requirements (Keyboard&Mouse, Gamepad) and allow automatic switching based on player input.
- The Unity PlayerInput component bridges your Input Action Asset and GameObjects, automatically handling device assignments and event routing with minimal setup.
- Enable Actions in OnEnable() and disable them in OnDisable() to properly manage resources and prevent unwanted input when scripts are inactive.
- Use ReadValue<T>() on InputAction.CallbackContext to extract input data in the correct format (Vector2, float, button states) from any device.
- Professional games like Hades, Microsoft Flight Simulator, and It Takes Two leverage this system for seamless device support, Unity input rebinding, and Unity local multiplayer input.
Actually, wait—one more thing I want to mention. The learning curve on the Unity Input System can feel steep at first, especially when you're coming from the simplicity of Input.GetKey(). But I promise you, after you've set it up properly once or twice, it becomes second nature. And when you hit that moment where you add gamepad support to your game by just adding a single binding without touching any code? That's when it clicks, and you'll wonder why anyone would build input systems any other way.
Common Questions
What is the Unity Input System and why should I use it instead of the old Input class?
The Unity Input System is a complete replacement for Unity's traditional Input class, designed to handle modern multi-platform input requirements. Instead of hardcoding specific keys or buttons (Input.GetKey(KeyCode.Space)), you define abstract player intentions (Actions like "Jump") that can be triggered by any input device. This abstraction lets you support keyboards, gamepads, joysticks, and touchscreens with the same gameplay code, making it essential for professional game development.
How do I create a Unity Input Action Asset and what does it contain?
Create an Input Action Asset by right-clicking in your Project window and selecting Create > Input Actions. This .inputactions file acts as a central database for all your input configuration. It contains Action Maps (collections of related Actions like "Player" or "UI"), Actions (player intentions like "Move" or "Jump"), and Bindings (connections between Actions and physical controls). Think of it as separating your input configuration from your game code for better organization.
What are Unity action maps and when should I use them?
Unity action maps are named collections of Actions that are typically used together in a specific game context. Create separate Action Maps for different states: "Player" for gameplay movement and combat, "UI" for menu navigation, "Vehicle" for driving controls. The key benefit is you can enable or disable entire Action Maps at once using playerInput.SwitchCurrentActionMap(), making it trivial to switch between controlling your character and navigating a pause menu.
How do Unity input callbacks work and which one should I use?
Every Action has three Unity input callbacks: started (input begins), performed (the interaction completes), and canceled (input stops or the interaction is aborted). For button presses, use performed—by default it fires the moment the button crosses the press threshold, though interactions like Hold or Tap change that timing. For continuous inputs like joystick movement, performed fires each time the value changes. Subscribe to these events using += syntax: jumpAction.performed += OnJump;. This event-driven approach is more efficient than checking input every frame in Update().
What is the Unity PlayerInput component and how do I set it up?
The Unity PlayerInput component is a MonoBehaviour that connects your Input Action Asset to your GameObjects. Add it to your player object, drag your .inputactions asset into its Actions field, and set the Behavior mode (usually "Invoke Unity Events" for beginners). It automatically handles device assignments, routes input events to your scripts, and can manage Unity control schemes. It's the bridge that makes the entire system work without writing manual device management code.
How do I implement Unity gamepad keyboard support for the same Action?
When creating an Action in your Input Action Asset, add multiple Bindings—one for each device. For a "Jump" Action, add a Binding to Space [Keyboard] and another to South Button [Gamepad]. The system automatically detects which device the player is using and triggers your Action from either input. Your gameplay code stays exactly the same regardless of the device, which is the entire point of device abstraction.
Can I implement Unity input rebinding to let players customize controls at runtime?
Yes, and it's surprisingly straightforward. The Unity Input System was designed with Unity input rebinding in mind. Use the InputActionRebindingExtensions class and methods like PerformInteractiveRebinding() to let players reassign any Binding at runtime. The system handles saving and loading these custom bindings. This feature is expected in modern PC games, and the Input System makes it accessible even for student projects.
How do I set up Unity local multiplayer input for split-screen games?
Use the PlayerInputManager component for Unity local multiplayer input. Add it to a manager object in your scene, assign your player prefab (with a Unity PlayerInput component attached), and set the joining behavior to "Join Players When Button Is Pressed." When a new device provides input, the manager automatically instantiates a new player and assigns that device exclusively to them. This creates effortless drop-in/drop-out local co-op without complex device management code.
What are Unity control schemes and how do they help with device switching?
Unity control schemes define required devices for an Action Map. Create a "Keyboard&Mouse" scheme requiring a keyboard and mouse, and a "Gamepad" scheme requiring a gamepad. The system can then automatically switch active schemes based on the last device the player used, updating UI prompts and control layouts accordingly. This is crucial for games that support seamless switching between input methods, like Hades switching between controller and keyboard mid-combat.
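As a sketch of how you might react when the active scheme changes (say, to swap button prompts), PlayerInput exposes an onControlsChanged event. The PromptSwitcher class below is hypothetical, and the scheme name strings must match the schemes defined in your asset.

```csharp
// Hedged sketch: swapping button prompts when the control scheme changes.
using UnityEngine;
using UnityEngine.InputSystem;

public class PromptSwitcher : MonoBehaviour
{
    public PlayerInput playerInput; // assign in the Inspector

    void OnEnable()  => playerInput.onControlsChanged += OnControlsChanged;
    void OnDisable() => playerInput.onControlsChanged -= OnControlsChanged;

    private void OnControlsChanged(PlayerInput input)
    {
        // currentControlScheme matches the scheme names defined in the asset
        if (input.currentControlScheme == "Gamepad")
            Debug.Log("Show gamepad prompts");
        else
            Debug.Log("Show keyboard & mouse prompts");
    }
}
```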
How do I read input values from different device types using the same code?
Use context.ReadValue<T>() in your callback methods, where T matches your Action's Control Type. For a Vector2 movement Action, use Vector2 moveInput = context.ReadValue<Vector2>();—this works whether the input comes from WASD keys or a gamepad thumbstick. For button Actions, check context.performed or context.canceled. The Unity Input System handles converting device-specific input into the format your Action expects, so your code stays device-agnostic.
Do I need to enable and disable Actions manually, and why?
Yes, Actions and Action Maps are disabled by default and must be manually enabled to receive input. Call playerControls.Player.Enable(); in OnEnable() and playerControls.Player.Disable(); in OnDisable(). This practice saves system resources by not processing input when scripts are inactive, prevents unwanted input during state transitions (like loading screens), and gives you explicit control over when your game responds to player input.
What's the difference between using Unity Events vs C# Events with the PlayerInput component?
Unity Events (set via "Invoke Unity Events" behavior) let you wire up Actions visually in the Inspector—great for beginners and rapid prototyping. C# Events (set via "Invoke C# Events" behavior) require generating a C# class from your asset and subscribing to events in code, but offer better performance and type safety. For your first projects, start with Unity Events to understand the flow; for production projects, use C# Events with generated interfaces for optimal performance.