Dr. Checkmate, guest blogging over at Uncle Bear’s, writes:
Dr. Checkmate’s Ode to Savage Worlds | UncleBear
On a related note, d4 to d12 (or d4-2 to d12+2) doesn’t allow for a whole lot of granularity. You’re basically talking about all traits being on a scale of 1 to 5. Even somehow making it a scale of 1 to 10 would be an improvement.
I know what he means about granularity, but my experience is that more than about five steps doesn’t actually make much of a psychological impact. Too fine a gradation, even if statistically significant, tends to get lost in people’s mental model of how things work. Despite D&D 3+ grading attributes on a 3-18 scale, what actually matters is the -2 to +4 that usable characters tend to end up with. Similarly, even though each Skill rank in D&D “matters”, the difference between 7 and 8 ranks in a Skill tends not to get noticed. Even in systems like Hero and GURPS, which have you rolling 3d6 against a stat, the bell-shaped curve means that some points are more equal than others. In my own home-brew before I switched to Savage Worlds I used a 1 to 10 scale for both Attributes and Skills, but realistically PCs had about 3-8 in anything they actually did (except for some combat monsters that I actually kind of wish weren’t so crocked). Having a smaller spread in the general stuff but extra Disadvantages/Advantages actually seems to help players think of the characters as having distinct strengths and weaknesses, as well as opening up more actually playable characters. E.g. a middling Dexterity stat with a Fumble-Fingers Disad giving a minus to fine manipulation is more memorable and easier to work with than a rock-bottom Dexterity score, which in many systems is a death sentence.
I sometimes wonder if something like the seven-plus-or-minus-two rule is at work here. If a player can’t distinctly visualize all the steps at once, do they just chunk it until they can?
It’s been a while since I read Savage Worlds, but as I recall you’re mixing stats with skill levels here.
In Savage Worlds that 1d4 to 1d12 range is used for both. In contrast, D&D’s stat bonuses may only vary from -2 to +4, but level varies from 1 to 20.
Thus the Granularity issue may have more to do with ‘skill level’ than stat mod…
@gleichman – I’m not sure what you mean, exactly. Maybe I should be clearer, but I’m making two different but related points: in terms of Stats, D&D 3+ is about the same level of granularity as Savage Worlds (actually having a stat with a -4 bonus in D&D is as rare or rarer than having a stat of d12+2 in SW, and there’s no practical difference between a 12 and a 13 in any stat). On the other hand, though Skill bonuses in D&D 3+ have a wider range (usually somewhere between +1 and +20), they have a much smaller impact, so the difference between having +7 and +8 in a Skill tends to blur in the players’ minds. I’ve updated the original post to try to make that clearer.
BTW, Brian, is that you? If so, wow, it’s been a while. Haven’t “seen” you since r.g.f.a days… what are you up to now? What are you playing?
Yep, it’s me. I’m not up to much new, still doing what I’ve always done except now I’ve added some blogging to that. Still playing HERO System or Age of Heroes.
It’s the blogging that resulted in my finding this site. I must say, I like it. The Near/Far thing? Very nice.
Back to the article.
I think we’re talking past each other on D&D here. As I recall a +1 to a strike chance from level is every bit the equal of a +1 from a stat…
Perhaps you’re speaking of the bonus in relation to the total? In which case, the +4 from stats pales to the +20 from skill out of the total of +24.
But if you mean that people will tend to mentally lump +15 and +16 from skill into about the same “I’m this good” grouping, you may have a point.
My only comment in that case is that not everyone thinks that way. I put my first response forward as proof, since I didn’t get it on first look. Likely because I’m very used to thinking in far smaller terms.
Ah. See, I tend to think of +1 bonus from a stat as being a bigger deal, because it’s (e.g.) +1 to hit in melee, +1 damage, and +1 to every relevant skill check…even though the +1 added to the d20 is the same as a +1 from having a Skill rank. Plus, you can’t even put ranks into combat abilities, so that attack bonus for your stat is equivalent to a fistful of levels.
But mostly I’m thinking about the mental grouping that goes on. It’s not clear to me that, say, a percentile system is really perceived by the players as allowing for much more distinct characters than something like Savage Worlds or HERO with fewer steps but much more substantial and memorable modifiers. Or maybe it’s just the gamers I hang out with.
I’ve added your blog to my blogroll.
Well, one can always test that by randomly reducing various players’ bonuses by 1 or 2 and see what their reaction is 🙂
I feel it has a lot to do with how familiar they are with the subject. For the ‘I show up and play’ gamer, I don’t doubt you’re right. But I think those who dig into the system tend to look closer.
I had great fun with this a while back. In AoH (a d100 system), one of my players asked what the cost for a well-made sword was. I quoted the normal price and he laughed it off as seriously overpriced.
For the next few games I pointed out every failure by him and the NPCs that missed by 1% with serious results. It happened a number of times per session. He now wishes he had the money to buy the sword.
In D&D? Honestly, unless the GM made a point of it I’d be surprised if players noticed a -1 or -2 applied…but then I only have one player (maybe two) who I’d count among the “dig into the system” types.
One of my favorite anecdotes on this sort of thing from the real world is that pro golfers on the PGA tour were asked how many of their six-foot putts they thought they sank. Most guessed in the 80 to 90 percent range. In actuality most of them were sinking about half of their putts at that distance. The only guy who got it right was the best putter on the tour at the time, Ben Crenshaw. The site I linked to doesn’t mention the other part of the anecdote as I originally heard it, which was that Crenshaw not only knew his own percentage, he knew the percentages of most of the other top golfers which I think shows that it wasn’t that he was more gifted at estimating, but he actually bothered to track and calculate it.
I did a similar post a few months ago about the granularity in SW. I’d like to add that World of Darkness is 1 to 5 for stats and skills. Yes, I know you add them together, giving a larger range. Remember Shadowrun? I think that was basically one to six.
The “numbers” for each game system should be interpreted based on that system. Basically, if you look at GURPS, Hero and D&D, they each have roughly the same range for stats but Strength 12 in each means totally different things. It doesn’t mean that any of them are right or wrong. Just different.
First, for D&D 3, while stat modifiers for starting characters are generally in the -2 to +4 range you mentioned, they quickly spread upwards. A starting character’s skills can range from -2 to 10 or so, with racial bonuses and feats, higher for certain skills (e.g. with size modifiers). The to-hit modifier for starting characters might range from -2 to 6, and armor class ranges from 8 to 18 or higher.
It is not unusual for higher level characters to have (adjusted) stat bonuses of 8 or higher, usually due to magic items.
More significantly, creature stats can be arbitrarily high, and things scale predictably. A giant might have a +10 or higher Strength modifier.
I think a 5% difference in abilities is noticeable in D&D, especially for abilities that are frequently used, such as to hit, armor class, or skills like “hide”. How much depends on where the success probability lies. The difference between 45% and 50% to hit probabilities is less significant than the difference between minions having a 10% or 5% chance to hit you (which amounts to a 50% reduction in damage from them). Most D&D players will go to lengths to get a +1 bonus when success is important. (I’m stating these things descriptively, not prescriptively. IMO, it gets tedious how much time and thought in D&D 3 is spent seeking that +1 bonus.)
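Here’s a quick back-of-the-envelope sketch of that last comparison, using only the percentages already quoted:

```python
# Relative impact of the same 5-point swing, depending on which side of
# the table it lands on (the percentages are the ones quoted above).

def relative_change(before, after):
    return (after - before) / before

# Your attack: 45% -> 50% to hit is only about 11% more hits landed.
print(relative_change(0.45, 0.50))   # ~0.11

# The minions' attacks: 10% -> 5% halves the damage you expect to take from them.
print(relative_change(0.10, 0.05))   # -0.5, i.e. a 50% reduction
```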
SW mechanics still baffle me. I’m not sure how much better a d6 is than a d4, especially when I realized that a d4 has a (slightly) better chance of getting that crucial 6 than a d6 does. Starting characters seem to be clustered in the d4 to d8 range, so I’m not sure there are really 5 common skill levels for starters. Monsters can be given that +1 or +2 on their d12, but I’m not sure how much difference that makes, when it seems more important to have a small probability of a botch and a large probability of a critical success than it does to merely increase the average roll. There probably are discussions of SW probabilities and strategies out there, but I haven’t internalized them yet.
I’m not exactly sure where you get the concept of a “crucial 6” from in SW. The target number is usually 4, which a d6 will get 50% of the time, and a d4 25% (without worrying about the Wild Die).
It’s true the chance of rolling a 6 or better is slightly higher on a d4 than a d6, and that is a bit weird, but unless you’re trying to hit somebody with a d8 in Fighting or a man-sized target at Medium range you aren’t trying to beat exactly 6. And even then, the chance of getting a raise against a 6 (i.e. rolling a 10) is almost exactly double for a d6 as for a d4. The probabilities aren’t perfectly smooth, but I don’t think a 1% hiccup here and there (e.g. the chance of getting a single raise against the standard TN of 4 with a d6 instead of a d8, which does come up frequently) is completely baffling.
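For anyone who wants to check those figures, here’s a minimal sketch (Python) of the acing-die math, leaving the Wild Die out of it; the only rule assumed is that a die that rolls its maximum is rolled again and the new roll added:

```python
# Chance that a single acing (exploding) die totals `target` or more.
# Only the trait die is modeled; the Wild Die is left out.

def p_at_least(sides, target):
    if target <= 1:
        return 1.0
    direct = max(0, sides - target)  # non-maximum faces that reach the target outright
    # The maximum face aces: add the reroll and see if it covers the remainder.
    return (direct + p_at_least(sides, target - sides)) / sides

print(p_at_least(4, 4), p_at_least(6, 4))    # 0.25 vs 0.50: hitting TN 4
print(p_at_least(4, 6), p_at_least(6, 6))    # ~0.19 vs ~0.17: the d4 edge at 6
print(p_at_least(4, 10), p_at_least(6, 10))  # ~0.05 vs ~0.08: a raise against a 6
print(p_at_least(6, 8), p_at_least(8, 8))    # ~0.14 vs ~0.13: the d6/d8 hiccup at TN 4
```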
As for D&D, I’ve no doubt that there are literally more distinctions that can be made than in SW, just as there are literally more in any percentile-based system than there are in D&D. But do they really matter to how the players think of the characters? I thought one of your dissatisfactions with D&D 3e was that optimized characters tended to look very much alike depending only on a few choices as to role adopted (e.g. two-weapons Ranger versus missile-weapon Ranger) and race and branch of the feat tree elected.
I’m not arguing for one way versus the other, or that there’s any right and wrong, I’m just curious whether there may be cognitive limits that tend to push people to using fewer distinctions than the game design technically allows for. That your D&D players would spend a long time trying to come up with that +1 bonus could cut either way: maybe they truly value each fine gradation, or maybe they over-value the shiny +1 not actually appreciating the difference between cutting their chance to be hit from 10% to 5% versus raising their chance of success from 90% to 95%….
I misremembered the success number in SW as 6, rather than the real value of 4. That makes 4 and 8 the breakpoints (although 6 is a common parry/toughness value, so 6 and 10 can be similarly important.) My main point is that, just as the mechanical distinction between +1 and +2 is situation dependent, so is the distinction between d4 and d6.
The psychological factor in D&D is bigger than the mechanical one. If the character has an 8 in Charisma, the player usually goes out of their way to play him or her as surly or prickly, although the only mechanical effect is a -1 to certain skills. A character with an 18 Int is “the smartest guy in the room”, and will lord it over the characters with 16 and 14 Int.
The point-buy system flattens the curve in a number of ways. Attributes start at 8, and costs increase super-linearly.
An 18 (+4) is usually prohibitively expensive, so the result is one attribute at 16 (+3) and the rest in the -1 to +2 range. I use random rolls in my game to give some chance of odd scores (which can be improved at levels 4 and 8) and some chance of very good or very bad scores.
However, the real pursuit of the +1 isn’t in initial character generation. You have a large number of choices about skills, feats, spells, and especially magic items. Usually, there are two types of criteria driving the choices. One is: in my specialty, which I’ll use all the time, I want my number to be as large as possible. So I’ll take a feat to get a +1 to make certain spells harder to save against, or a +1 with a certain weapon, and then buy items to give me an additional +1 for the same thing. Each +1 is significant, because it is increasing my “yield” by 10% against evenly matched foes, which are the bulk of the situations I’m engaged in. The other criterion is: making the risk for a certain move acceptable. If my chance of messing up my best spell when cast defensively is 20%, I just won’t do it, ever. So I choose either to ensure near-automatic success at casting defensively, or I’ll find an alternate strategy to make sure I’m never trying to cast a spell in the front lines. So while there’s no difference between a +1 and a +7 in Concentration (because either way I’ll never cast defensively), there’s a big difference between a +9 and a +10 (10% vs. 5% failure at something I will do routinely). Then again, there’s no difference between +12 and anything higher. So I get the safe value, and stop investing. Each such option that becomes “safe” gives me a new schtick for my character that many others won’t even try.
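To make that breakpoint arithmetic concrete, here’s a small sketch; the DC of 12 is purely illustrative (it just reproduces the 10% and 5% figures above), but the 5%-per-+1 staircase is the point:

```python
# d20 + bonus vs. an illustrative DC: the failure chance steps down 5% per +1
# until it hits zero, after which further bonuses buy nothing.

def failure_chance(bonus, dc=12, die=20):
    failing_faces = min(die, max(0, dc - bonus - 1))  # rolls that still fall short
    return failing_faces / die

for bonus in (1, 7, 9, 10, 12, 15):
    print(f"+{bonus}: {failure_chance(bonus):.0%} chance to fail")
# +1 and +7 both fail too often to risk; +9 is 10%, +10 is 5%;
# +12 and everything above it are identical at 0%.
```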
In SW, it is hard to get to safe points. I think a d12+2 is what you need to avoid a real chance of outright failure against a 4. (You only fail on a botch, a (1,1), which is 1/72. If you only have a d12+1, you fail if both dice are in the range 1-2, which is 1/18; with a plain d12, it’s 1-3, which is 1/8.) On the other hand, D&D is less forgiving. I’ve had many rolls where failure would be death, and almost every session there are rolls where failure takes your PC out of play for an hour or so.
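For reference, a sketch of where those fractions come from, assuming (as the arithmetic above does) that the fixed +1 or +2 is added to whichever die you keep; acing is ignored, since a die that aces has already beaten a 4:

```python
from fractions import Fraction

# A Wild Card rolls the trait die and a d6 Wild Die and keeps the better,
# so the roll only misses TN 4 if BOTH dice come up short.

def fail_vs_4(trait_sides, bonus):
    trait_short = Fraction(max(0, 3 - bonus), trait_sides)  # faces where face + bonus < 4
    wild_short = Fraction(max(0, 3 - bonus), 6)
    return trait_short * wild_short

print(fail_vs_4(12, 2))  # 1/72 for d12+2
print(fail_vs_4(12, 1))  # 1/18 for d12+1
print(fail_vs_4(12, 0))  # 1/8  for a plain d12
```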
I think the question I’m getting at isn’t whether the players use the system to pursue mechanical advantage, the question is whether they perceive characters as being distinct from each other based on such fine gradations. So Concentration bonuses between +1 and +7 are functionally identical, as are Concentration bonuses above +12–that’s a lot of distinctions being made without a difference already. Would your players view two characters identical in all respects except for a +9 vs a +10 Concentration as being really different? Do you think they should? Should we prefer game systems that make distinctions at that level, or even finer?
I think it works the other way around. If I decide that I am going to play a rash wizard that charges into the frontlines, I will make sure my Concentration is +10 (or soon be playing a different character.) If I am playing a cowardly wizard that always seeks a safe distance between herself and the enemy before casting spells, I will ignore Concentration entirely.
This is really a digression about how visible a +1 difference is in d20, not about the ideal number of distinctions. It is a flaw in d20 that, as you go up in level, you have to recalculate what values you need in different skills to play your character consistently. Different skills have different breakpoints, and the breakpoints change over time.
Psychologically, players PERCEIVE a difference even between an INT 17 and an INT 16 character, where there is (almost) no mechanical difference. I’m not sure I can explain why, but I do it myself.
I don’t need a large number of gradations between human player characters: below average, average, above average and exceptional basically covers it. But I think there should be a very large number of gradations between Bilbo’s strength and Smaug’s.
I still don’t quite know how to translate SW stats into the intuitive scale. Is d4 average or below average (for people in general, it’s below average for PC’s)? Is a d4 skill below average? But most people don’t have the skill at all… I’m not saying that experienced SW players can’t make the translation, just that I can’t yet.
My impression is that a d4 stat is intended to be below average for the population. The Youth Hindrance (which makes you 8-12 years old) limits you to 3 points to adjust your attributes, but doesn’t drop any below d4. The Elderly Hindrance drops your Strength and Vigor a die type to a minimum of d4 and makes it impossible to raise them. It’s possible to have worse effective scores than d4 (for instance if you’re Anemic), but the attribute can’t go any lower. Further evidence is that a “Typical Citizen” in the SF and Fantasy Toolkits is listed as having a d6 in every stat.
As for Skills, I think that a d4 is supposed to represent having some ability (or else they wouldn’t have it at all), but not being particularly competent. A Typical Citizen might have a d4 in Shooting, and nothing in Fighting (for SF), or a d4 in Fighting (for Fantasy), but nobody seems to have less than a d6 in any Skill that’s part of their “job.” E.g. a Typical SF Merchant has d6 or better in Gambling, Notice, Persuade, Piloting, Repair, Shooting and Streetwise; a Snitch in a Pulp game has at least a d6 in Driving, Notice, Persuasion, Stealth and Streetwise.
SW would certainly be helped (at least for new players) by making such assumptions explicit, though I guess they at least avoided the problem that so many games have of declaring a certain score to be typical or competent when the game mechanics make that manifestly untrue.
As a player who’s known to get a little crunchy with numbers, I’ve thought a bunch about the concept of Granularity, and I think it really comes down to J’s point about psychological impact. The player cares about “Does this difference really make it more likely for me to succeed?”
Take, for example, DnD3: a +1 has much more impact for the player when it increases their actual hits by 50 or 100% (19 or 20 target) than when it only increases hits by 10% (11 target) or ~5% (3 target), even though statistically you’re always increasing the odds of getting a hit by 5 percentage points.
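Here’s a little sketch of that, showing what a +1 buys for a few different target numbers (assuming only that a natural 1 always misses and a natural 20 always hits):

```python
# How much extra "yield" a +1 gives, depending on the number you need on the d20.

def hit_chance(target):
    target = min(20, max(2, target))  # natural 20 always hits, natural 1 always misses
    return (21 - target) / 20

for target in (20, 19, 11, 3):
    before, after = hit_chance(target), hit_chance(target - 1)
    print(f"need {target}+: {before:.0%} -> {after:.0%}, "
          f"{(after - before) / before:+.0%} more hits")
# need 20+:  5% -> 10%, +100% more hits
# need 19+: 10% -> 15%,  +50% more hits
# need 11+: 50% -> 55%,  +10% more hits
# need  3+: 90% -> 95%,   +6% more hits
```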
For DnD3 skill rolls, I always mentally chunked skills in groups of 5 (around 5, around 10, around 15, etc.) because the skill check DCs were generally given in levels of 5. You had your “I make”, “I make a lot”, “50/50”, “hard”, and “lucky”.
Along with absolute range, however, is the difference between the fixed value range and random value range of a skill check. As a player, I like to at least have some idea as to “How skilled am I at this task?”
I use Ars Magica as my example for that problem. You would always roll an open-ended d10 for everything as your random modifier. For skills, you would add in your rank level. Novice was “1”, Master was “5”. When you start playing with that, you find that anything of any difficulty at all is either easy enough for anyone to try, or hard enough that a Master is going to fail a significant portion of the time. Casting magic, on the other hand, tended to have difficulty jumps of +5, if you came within 5 of your target you “succeeded but fatigued,” and your fixed modifiers tended to run in the +15 -> +30 range, making it very easy to determine whether something was hard, likely, or easy.
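To illustrate, here’s a rough simulation of that Ars Magica point; the open-ended reading used here (a 1 means roll again and double, botches ignored, the die read as 1-10) is a simplification, and the target numbers are just illustrative:

```python
import random

# Compare a Novice (+1) and a Master (+5) on an open-ended d10 roll against a
# few target numbers; the 1-5 skill range is tiny next to the die's spread.

def open_ended_d10(rng):
    multiplier = 1
    while True:
        roll = rng.randint(1, 10)
        if roll == 1:              # simplified "roll again and double"
            multiplier *= 2
            continue
        return roll * multiplier

def success_rate(skill, target, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(open_ended_d10(rng) + skill >= target for _ in range(trials)) / trials

for target in (6, 12, 15):
    print(f"target {target}: novice ~{success_rate(1, target):.0%}, "
          f"master ~{success_rate(5, target):.0%}")
# Low targets are within anyone's reach; once a target is at all hard,
# even the Master fails a sizable share of the time.
```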