Hi Jennifer,
When you divide the differences from the mean by the standard deviation, you are standardizing the values. That is, you are expressing the values as deviations from the mean in standard deviation units (these are referred to as Z scores).
As an example, say the mean of a data set is 50 with a standard deviation of 5, and you have a score of 60. That means you are two standard deviations above the mean, so you would have a Z score of 2 (i.e., (60 - 50)/5 = 2).
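If it helps to see that arithmetic laid out, here is a quick Python sketch using nothing but the example numbers above:

mean = 50
sd = 5
score = 60

# Z score = (raw score - mean) / standard deviation
z = (score - mean) / sd   # (60 - 50) / 5
print(z)                  # 2.0, i.e., two standard deviations above the mean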
When you standardize a set of scores like that, the mean becomes 0 and the variance and standard deviation become 1. That does not mean a Z score can't be greater than 1 or less than -1, though. In the example above, your Z score of 2 simply means you are 2 standard deviations above the mean; if you had a score of 55 instead of 60, your Z score would be 1 (i.e., (55 - 50)/5 = 1).
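You can also check that mean-0, standard-deviation-1 property directly. Here is a small Python sketch; the list of scores is made up purely for illustration, and I'm using the population standard deviation for both steps so the numbers line up:

import statistics

scores = [40, 45, 50, 55, 60]              # hypothetical raw scores

m = statistics.mean(scores)                # 50
sd = statistics.pstdev(scores)             # population SD, about 7.07 here

# Standardize every score: subtract the mean, divide by the SD
z_scores = [(x - m) / sd for x in scores]

print(statistics.mean(z_scores))           # 0.0 -> the standardized mean is 0
print(statistics.pstdev(z_scores))         # 1.0 -> the standardized SD is 1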
The reason it can be informative to standardize values like this is that it communicates where a particular value falls relative to the mean without your having to know the original scale. That is, I may not know what a score of 60 means on its own (it could be a good or a bad score). But if you tell me your Z score is 2, I know you are 2 standard deviations above the mean. If it is a measure where higher scores are better (e.g., a test grade), I would know your grade was very high, since you scored a full 2 standard deviations above the class mean.
I hope this explanation helps! Feel free to message me if anything wasn't clear or if you want additional description of standard scores and why they can be useful.
-Mike