The relationship between mouse CPI (DPI), in-game sensitivity and screen resolution is a rather curious one for many people – how do these three things work together, and which settings are best for optimal performance? This article aims to help you understand the underlying technical concepts and to dispel some urban myths (so-called ‘pixel skipping’).
Note: Although I’ve tried my best to keep things as simple and easy to understand as possible, some degree of technicality is impossible to avoid when covering this topic.
First, let’s give some preliminary definitions of the three main terms we’ll be working with below: (1) CPI, (2) sensitivity and (3) resolution. CPI is short for ‘counts per inch’. Counts are the base units the sensor in a computer mouse works with. CPI describes the number of counts that are registered by the sensor when physically moving that sensor (or rather the mouse) exactly one inch. A mouse set to 400 CPI will, therefore, register 400 counts when being moved one inch. Sensitivity means the in-game sensitivity, i.e. the sensitivity of your in-game cursor motion that can be set within the game. This in-game sensitivity is typically independent of the OS sensitivity, which is the sensitivity used on the desktop. Lastly, resolution refers to the screen resolution, which is the internal resolution your PC is working with; this can differ from the display resolution. For example, it is possible to display a screen resolution of 720p on a 1080p display by performing resolution scaling. The display resolution is the physical resolution of your display (i.e. the number of physical pixels your display has), whereas the screen resolution is a non-physical quantity. Whenever ‘pixels’ are mentioned below, the non-physical variety is meant.
Now, imagine you’re moving your 400 CPI mouse exactly 1/400 inch, which results in a single count being registered. This single count will first be processed by the mouse and then transmitted to your PC, where it will be interpreted by your OS. If you’re on the desktop, the journey of that count ends right there – if you’re in a game, it will also be interpreted by the game you’re playing.

On the desktop you’re in a 2D environment: you’re moving a cursor along a flat 2D plane that has clear boundaries – once you’ve reached the edges of your screen, the cursor will no longer move, despite further physical mouse movement. In Windows, the way your cursor behaves is controlled by the Windows cursor settings, which can be adjusted within the control panel. The default slider allows you to adjust the cursor sensitivity across 11 different levels that represent varying multipliers (curiously, the Windows registry offers 20 levels instead of 11). When set to the sixth ‘tick’ (6/11), the multiplier will be 1, leading to a 1-to-1 translation between physical mouse movement and on-screen motion. In this case, one count (from the mouse) will be translated to one pixel of on-screen motion. When set to a multiplier below 1, some counts will be omitted, whereas a multiplier greater than 1 will lead to some counts being doubled in order to achieve the specified cursor sensitivity. For example, if the multiplier is set to 2 (11/11), every count will be translated to an on-screen motion of 2 pixels. Since one count is the smallest possible input from the mouse (i.e. you cannot move ‘less than 1 count’), a multiplier of 2 results in every second pixel being skipped. In other words, a multiplier greater than 1 inevitably leads to a loss of precision, as certain pixels are impossible to reach with the desktop cursor. This phenomenon can be aptly termed pixel skipping.
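A minimal sketch can make this concrete. The function below is purely illustrative – Windows’ actual pointer ballistics are more involved (especially with ‘Enhance pointer precision’ enabled) – but it shows why a multiplier of 2 makes every second pixel unreachable:

```python
def cursor_positions(num_counts: int, multiplier: float) -> list[int]:
    """Return the on-screen x-position reached after each count,
    assuming a simple linear multiplier (no acceleration)."""
    positions = []
    x = 0.0
    for _ in range(num_counts):
        x += multiplier           # each count moves the cursor by `multiplier` pixels
        positions.append(int(x))  # the cursor always lands on a whole pixel
    return positions

# At 6/11 (multiplier 1) every pixel is reachable:
print(cursor_positions(4, 1.0))  # [1, 2, 3, 4]

# At 11/11 (multiplier 2) every second pixel is skipped:
print(cursor_positions(4, 2.0))  # [2, 4, 6, 8]
```

Note how positions 1, 3, 5 and 7 never appear in the second run – those pixels simply cannot be reached no matter how carefully you move the mouse.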
The upshot of this is twofold: First, it is advised to simply set the Windows sensitivity slider to 6/11 for a 1-to-1 response between mouse and on-screen movement on the desktop (along with disabling ‘Enhance pointer precision’). Second, due to the direct relationship between counts and pixels, higher desktop resolutions will require higher mouse CPI. For example, using a 400 CPI mouse in conjunction with a 1440p display will result in egregiously slow cursor movement.
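A quick back-of-the-envelope calculation illustrates the second point, assuming the 1-to-1 mapping at 6/11 described above:

```python
def inches_to_cross(screen_width_px: int, cpi: int) -> float:
    """Physical mouse travel needed to move the cursor across the whole
    screen, assuming 6/11 (1 count = 1 pixel)."""
    return screen_width_px / cpi

# A 1440p desktop is 2560 pixels wide:
print(inches_to_cross(2560, 400))   # 6.4 inches of mouse travel
print(inches_to_cross(2560, 1600))  # 1.6 inches of mouse travel
```

Having to drag the mouse 6.4 inches just to cross the screen is what ‘egregiously slow’ means in practice.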
Another thing that can be noted at this point is that said direct relation between counts and pixels (and hence the danger of pixel skipping) is true for all 2D environments (this includes 2D games such as StarCraft). However, it is not true for 3D environments. Let’s get into it.
When you think of 3D games, there are usually two things that can be differentiated at a basic level: the world in front of you, which encompasses everything there is, and your perspective on this world (your point of view), which restricts what you’re seeing at any given time. The same basic principle applies to movies or photographs: there is the point of view of the camera and the world that is being viewed. In a movie, the world is fixed whereas the camera is moving, capturing whatever slice of reality it points at. In games, however, this principle is reversed: the camera (your point of view) is fixed whereas the world around you is being moved. The term for this is an inverted world transform matrix (or view matrix for short). For convenience, we’ll be calling any world movement camera movement below.
This kind of camera movement can be thought of as a rotation. The base axis of this rotation is the camera itself. Rotation is based on angles, and the base unit for measuring angles is radians. For convenience, we’ll be using degrees instead of radians below. For rotating the camera, games typically use a base radial unit that ‘corresponds’ to one count. As such, for one count registered by the mouse, the view matrix will be shifted by the amount specified by the base radial unit (in degrees) multiplied with the sensitivity. Much like with the Windows sensitivity slider explained above, sensitivity acts as nothing but a multiplier here as well. Here’s an example of how it works in practice: in Quake (one of the first FPS games) the base radial unit is called yaw (for the x-axis) and pitch (for the y-axis) and measures exactly 0.022°. For every count received from the mouse, the view matrix will be shifted by 0.022° given an in-game sensitivity of 1. If the sensitivity is set to 2, the view matrix will be shifted by 0.044°, whereas at a sensitivity of 0.5 the view matrix will be shifted by 0.011°. The formula can, therefore, be defined as follows: effective sensitivity = yaw/pitch × sensitivity multiplier. As you can see, the base radial unit merely defines how the sensitivity scales, i.e. using a smaller base radial unit will result in higher sensitivity multipliers whereas a larger base radial unit will result in lower sensitivity multipliers. For example, Overwatch has a yaw/pitch of 0.0066°, which is why the sensitivity multipliers are higher than in CSGO, which is using a yaw/pitch of 0.022° (like Quake).
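The formula above can be sketched in a couple of lines. The yaw values for Quake/CSGO (0.022°) and Overwatch (0.0066°) are taken from the text; everything else is straightforward arithmetic:

```python
def degrees_per_count(base_radial_unit: float, sensitivity: float) -> float:
    """Effective sensitivity: view-matrix shift in degrees per mouse count."""
    return base_radial_unit * sensitivity

print(degrees_per_count(0.022, 1))   # Quake/CSGO at sensitivity 1 -> 0.022°
print(degrees_per_count(0.022, 2))   # sensitivity 2 -> 0.044°
print(degrees_per_count(0.0066, 1))  # Overwatch at sensitivity 1 -> 0.0066°
```

This also makes the scaling point obvious: to get an effective sensitivity of 0.022° per count in Overwatch, you would need a sensitivity multiplier of 0.022/0.0066 ≈ 3.33 rather than 1.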
Now, where does mouse CPI enter the equation here? As explained earlier, the base radial unit ‘corresponds’ to one count. This means that when you move your 400 CPI mouse exactly one inch, the view matrix will be shifted by exactly 400*(base radial unit*sensitivity multiplier) – it’s as simple as that. So if your base radial unit is 0.022°, your sensitivity multiplier is 2 and your CPI is 400, the view matrix will be shifted by 17.6° per inch of mouse movement (400*0.022*2). We can then further calculate (360/17.6) to get the number of inches it takes to perform a full revolution: 20.45 inches (~51.95 cm). This is called the turn circumference, i.e. the distance of physical mouse movement it takes to perform a full 360° rotation within the game.
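The calculation above can be wrapped into a small helper (names are mine, not from any game’s source):

```python
def turn_circumference_cm(cpi: int, yaw: float, sensitivity: float) -> float:
    """Physical mouse travel (in cm) for a full 360° in-game rotation."""
    degrees_per_inch = cpi * yaw * sensitivity  # rotation per inch of mouse movement
    return 360 / degrees_per_inch * 2.54        # inches for 360°, converted to cm

# The example from the text: 400 CPI, Quake yaw, sensitivity 2
print(round(turn_circumference_cm(400, 0.022, 2), 2))  # 51.95
```

Plugging in your own CPI, yaw and sensitivity lets you compare your effective sensitivity across games regardless of their base radial units.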
The relation between CPI and the effective sensitivity (as defined above) is inversely proportional: the lower the CPI, the higher the effective sensitivity needs to be in order to achieve the same set turn circumference – and vice versa. The important part is that the base radial unit (yaw/pitch) is constant, so the only variable here is the in-game sensitivity. Why is that important? Let’s take a look at the following thought experiment.
Imagine you want your turn circumference to be exactly 10.39 cm (a very high effective sensitivity) in Quake (yaw/pitch of 0.022°). Furthermore, your mouse only supports 400 CPI. In order to achieve that turn circumference, you have to set your sensitivity multiplier to 10. Now imagine that you want to turn your point of view as little as possible. You move your mouse very little, just enough for it to register a single count. This single count being registered will result in an on-screen view matrix shift of 0.22°, which is quite large. The view matrix will ‘jump’ a fair distance on the screen in a single step. The point here is that this view matrix shift will be the upper limit of your maximum possible accuracy. Since a single count is the smallest possible input from the mouse, it will be impossible to aim at any point between the starting and ending point of said ‘jump’. The smallest possible view matrix shift will be 0.22°; any smaller view matrix shifts you may intend to perform are impossible to achieve. Compare this to the same situation with a mouse set to 1600 CPI. In order to achieve the same desired turn circumference of 10.39 cm, you only need a sensitivity multiplier of 2.5 this time, which will result in a minimum possible view matrix shift of 0.055° – a lot finer.
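The thought experiment can be reproduced numerically. The helper names are again mine; the numbers (Quake yaw 0.022°, turn circumference 10.39 cm) come from the text:

```python
def sensitivity_for_turn(cpi: int, yaw: float, turn_cm: float) -> float:
    """Sensitivity multiplier needed to hit a given turn circumference."""
    turn_inches = turn_cm / 2.54
    return 360 / (cpi * yaw * turn_inches)

def min_shift_deg(yaw: float, sensitivity: float) -> float:
    """Smallest possible view-matrix shift: the rotation caused by one count."""
    return yaw * sensitivity

for cpi in (400, 1600):
    s = sensitivity_for_turn(cpi, 0.022, 10.39)
    print(cpi, round(s, 2), round(min_shift_deg(0.022, s), 3))
```

Both configurations produce the identical turn circumference, but the 1600 CPI setup can rotate in steps four times finer (0.055° vs. 0.22°).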
The phenomenon just described is what people mean when they talk about so-called ‘pixel skipping’. As we have seen, however, there are no pixels being ‘skipped’, as the set resolution is entirely irrelevant when it comes to sensitivity for games using an inverted world transform matrix (a 3D environment). Camera movement is done by employing angles, not pixels. A view matrix shift of 0.22° will be a view matrix shift of 0.22° no matter whether the resolution is 720p or 2160p. The apt term for this phenomenon is, therefore, angular granularity, which describes how fine or coarse the rotation (the camera movement) will be. Using low CPI with a high sensitivity will result in lower angular granularity whereas using high CPI with a low sensitivity will result in higher angular granularity, given an identical set turn circumference.
Now, this doesn’t mean that you should turn up your CPI to the highest value possible in order to achieve the highest possible angular granularity. The following example should make it obvious why. Imagine that you want your turn circumference to be 51.95 cm instead of 10.39 cm in Quake. Given the same 400 CPI mouse as before, your sensitivity multiplier would have to be 2 instead of 10 in this game. The resulting minimum view matrix shift would then be 0.044°, which is finer than the minimum view matrix shift in the 1600 CPI example above. The conclusion from this is that angular granularity becomes less of an issue the higher your desired turn circumference is (or, to put it more simply, the lower your effective sensitivity is). Of course, it is always possible to increase CPI while decreasing sensitivity in order to increase the angular granularity and get even finer (‘smoother’) rotation, but after a certain point, the gain in angular granularity is no longer perceptible (let alone performance relevant). If you haven’t noticed your rotation being ‘jerky’ or ‘not smooth’ in games so far, chances are the angular granularity is already sufficiently high. As a general rule of thumb, I’d say it’s advised to use at least 1600 CPI at a turn circumference below 10 cm, at least 800 CPI at a turn circumference between 10 cm and 25 cm, and at least 400 CPI at a turn circumference above 25 cm for sufficiently high angular granularity, but these are just rough estimates and ultimately subjective. It is absolutely not advised to increase CPI to unnecessarily high levels just to avoid any ‘pixel skipping’ (while possibly increasing smoothing levels), which (as we have seen) only exists in a specific sense anyway.
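For reference, the rule of thumb above can be expressed as a tiny lookup function. To be clear, these thresholds are the subjective estimates stated in the text, not an objective standard:

```python
def suggested_min_cpi(turn_circumference_cm: float) -> int:
    """Rough minimum CPI for sufficient angular granularity
    (subjective thresholds from the rule of thumb above)."""
    if turn_circumference_cm < 10:
        return 1600
    if turn_circumference_cm < 25:
        return 800
    return 400

print(suggested_min_cpi(8))   # 1600
print(suggested_min_cpi(15))  # 800
print(suggested_min_cpi(52))  # 400
```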