  1. #1

    [Suggestion] Avoiding resource screw through biased shuffling

    Disclaimer: this post is long, deals with shuffling, and is written in poor English. Read at your own risk.

    In the recent survey about Hex I responded to, I stated that I did not find the Hex resource system very good. Not because of the threshold and cost system, which is in my opinion an improvement over MTG's: having to exhaust cards to "use" resources would be needlessly tedious in a digital card game, and the threshold mechanic is interesting in itself. No, the problem lies in how often the player gets screwed. Hex behaves exactly like MTG with regard to resource distribution in the deck, and therefore suffers from the same resource screw problems. That is a shame: given that this aspect is one of the most common criticisms of MTG, one might have expected a newly designed card game such as Hex to address it somehow.

    But if WotC has not managed to solve the screw problem in MTG over the years, can we really expect the much smaller Hex team to solve it by themselves?

    The answer, I think, is yes. What is holding MTG back is that it is, first and foremost, a physical card game. The rules state that players must be able to shuffle their decks "unassisted", which is of course understandable from an IRL tournament organisation perspective. But Hex, as Cory has stated again and again, is a purely digital TCG and does not suffer the same limitations. The deck shuffling is already "assisted" (done by the servers), and therefore does not have to be unbiased. Changing the shuffler's behavior is probably the least intrusive way of reducing screw, since it keeps every other aspect of the gameplay exactly as it currently is.

    There are already several threads discussing both the shuffler and resource screw in Hex. I will therefore try to avoid simply restating arguments that have already been made, and instead focus on describing a practical implementation of a biased shuffler that I consider well designed, along with the data I gathered by simulating its behavior.

    I had 5 objectives when designing the biased shuffler I describe in this post:
    1) It should be optional, and opt-in. A lot of players expect unbiased shuffling and should not be forced to use anything else.
    2) It should be efficient. No point in implementing a mechanic whose effects are barely seen.
    3) It should be easy to learn. Evaluating the probability of obtaining a card in the next draw will necessarily be more complicated than simply counting "outs" (which only works with unbiased shuffling), but it should nevertheless be as simple as possible for the player.
    4) It should be computationally inexpensive.
    5) It should be easy to implement.

    So I came up with the following shuffling algorithm, which depends on a card affinity function and on an integer parameter b in the range 0 to 100:

    + Shuffle the deck in an unbiased way, for instance using the Fisher–Yates shuffle algorithm.
    + Repeat b times: choose two cards in the deck at random (equiprobably), and compute the sum of their affinities with the cards immediately above and below them. Swap those cards, and compute the new sum of their affinities with the cards immediately above and below them. If the new sum is lower than the previous one, swap the cards again (putting them back where they were).
    + Zophie's step (optional): cut the resulting deck (cut position determined by the PRNG)

    For the purposes of this algorithm, we consider that the card above the top card of the deck is the bottom card, and the card below the bottom card is the top card.

    The affinity mentioned earlier between two adjacent cards c1 and c2 (c1 being above c2 in the deck) is defined as follows:
    + If c1 and c2 are both resources, their affinity is 0 if they can both be used to provide a common threshold, and 10 otherwise.
    + If c1 and c2 are both non-resources, their affinity is the absolute value of their cost difference, plus 10 if they require no common thresholds to be played
    + If c1 is a resource but not c2, their affinity is 200 if c1 provides a threshold c2 requires, and 100 otherwise
    + If c2 is a resource but not c1, their affinity is 50 if c2 provides a threshold c1 requires, and 100 otherwise
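    To make the two pieces above concrete, here is a minimal Python sketch of the whole procedure. The card model (dicts with `is_resource`, `cost`, and a `thresholds` set) is my own assumption for illustration, not an actual Hex data structure:

    ```python
    import random

    def affinity(c1, c2):
        """Affinity between two adjacent cards, c1 directly above c2.
        NOTE: the dict-based card model here is hypothetical."""
        common = bool(c1['thresholds'] & c2['thresholds'])
        if c1['is_resource'] and c2['is_resource']:
            return 0 if common else 10
        if not c1['is_resource'] and not c2['is_resource']:
            return abs(c1['cost'] - c2['cost']) + (0 if common else 10)
        if c1['is_resource']:            # resource above a non-resource
            return 200 if common else 100
        return 50 if common else 100     # non-resource above a resource

    def local_cost(deck, i):
        """Sum of card i's affinities with the cards immediately above and
        below it; the deck is treated as circular, as described above."""
        n = len(deck)
        return (affinity(deck[(i - 1) % n], deck[i]) +
                affinity(deck[i], deck[(i + 1) % n]))

    def biased_shuffle(deck, b, rng=random):
        deck = deck[:]
        rng.shuffle(deck)                # step 1: unbiased Fisher-Yates shuffle
        n = len(deck)
        for _ in range(b):               # step 2: b biased swap attempts
            i, j = rng.randrange(n), rng.randrange(n)
            before = local_cost(deck, i) + local_cost(deck, j)
            deck[i], deck[j] = deck[j], deck[i]
            after = local_cost(deck, i) + local_cost(deck, j)
            if after < before:
                deck[i], deck[j] = deck[j], deck[i]  # undo a worsening swap
        cut = rng.randrange(n)           # step 3 (optional): random cut
        return deck[cut:] + deck[:cut]
    ```

    This is only a sketch; in particular it does not special-case swaps of adjacent cards, where the local cost sums overlap.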

    The above-described shuffling algorithm is indeed optional: with b=0, it is equivalent to an unbiased shuffle. It is relatively easy to learn, since the only extra information the player needs to consider when estimating the outcome of the next draw is the card previously drawn (its thresholds, type, and cost). It requires at most 261 generated random numbers for a 60-card deck, which is reasonable. And it is straightforward to implement.

    I've benchmarked the shuffler efficiency by using the former human starter deck, with one minor difference: Lord Benjamin was replaced by a fourth Phoenix Guard Scoot (since Lord Benjamin is the only card of the deck that may allow drawing extra cards, excluding him simplifies things). Please consider the following sample starting hands for unbiased and maximally biased shuffling, as well as some more statistically significant data, obtained by simulating 100000 shuffles for each value of b in [0, 10, 20, ..., 100]:

    [Images omitted: sample starting hands for unbiased and maximally biased shuffling, plus the simulation data plots.]
    As 4 games out of 5 are screw-free (thank Kismet), the sample starting hands look similar for both shuffling methods. You should nevertheless have felt like mulliganing a few of the hands given by the unbiased shuffler, while hopefully being content with most of those produced by the biased one. The data plots show more reliably that the proportion of games affected by screw is significantly reduced. The proposed shuffling algorithm primarily avoids situations where a player cannot play the cards in their hand for lack of resources, and only to a lesser extent helps with thresholds. Doing otherwise might put specific decks at an advantage (for instance, 5-shard necrotics).

    The reduction in screw scenarios is significant. For instance, with the unbiased shuffler and mulligans forbidden, in about 1.6% of games the player cannot play any non-resource card during the first 5 turns (a situation dangerously close to the infamous input-peripheral-breaking threshold). With biased shuffling at b=100, still forbidding mulligans, this occurs in only about 0.2% of games: eight times less often.

    Practically speaking, the player should choose the value of b for each deck (for instance, via the same window used to choose the deck sleeve). In PvE, the AI should use the same value of b as the player, to prevent an unfair advantage. In PvP, both players should use the smaller of the two chosen values (this way, players opposed to biased shuffling feel no impact from the mechanic at all).

    Thanks for reading this far. Any comments are welcome.

  2. #2
    I think it's too complicated for the gain.

    1.6% with mulligan forbidden is probably worth not having to explain this and the meta strategy behind choosing your b value to every new player.

  3. #3
    The Transcended
    Join Date
    Jun 2013
    Location
    California
    Posts
    7,860
    If you have not already, I encourage you to look at this thread:
    http://forums.cryptozoic.com/showthr...light=gaussian

    Your idea is also an interesting way to go about it. I didn't do a computational complexity check on mine. In any case, despite me perhaps favoring such a change, I highly doubt Hex will ever adopt anything other than a uniform shuffler.

  4. #4
    I love it when people backup suggestions with data like this.

    Kudos to the work
    ---------

    I’d like to apologize for the above statement. I commented in a very juvenile and offensive way that rightfully angered some posters. I regret it and sincerely apologize.


  5. #5
    Thanks to you three for your feedback.

    Quote Originally Posted by Turtlewing View Post
    I think it's too complicated for the gain.

    1.6% with mulligan forbidden is probably worth not having to explain this and the meta strategy behind choosing your b value to every new player.
    Well, the 1.6% figure is a specific example of extreme screw. If, for instance, we consider that all starting hands with 0, 1, 5, 6, or 7 resources are bad, we get much higher occurrence rates:
    + 22.2% of bad starting hands with the unbiased shuffler (almost one game out of four!)
    + 4.0% of bad starting hands with my shuffler (with b=100)
    + 8.5% of bad starting hands with Yoss' gaussian shuffler (according to this post, could probably be improved by decreasing sigma)
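    For what it's worth, the unbiased figure above can be checked analytically with the hypergeometric distribution. The snippet below assumes 24 resources in the 60-card deck (an assumption on my part, but it reproduces roughly the 22.2% measured by simulation):

    ```python
    from math import comb

    def p_bad_hand(deck_size=60, resources=24, hand=7, good=range(2, 5)):
        """Probability that a starting hand has a 'bad' resource count,
        i.e. anything outside 2-4 resources, via the hypergeometric law."""
        total = comb(deck_size, hand)
        p_good = sum(comb(resources, k) * comb(deck_size - resources, hand - k)
                     for k in good) / total
        return 1 - p_good

    print(round(p_bad_hand(), 3))  # ~0.225 with 24 resources out of 60
    ```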

    You are obviously right that such a shuffler would be complicated to explain to new players, though.

    Quote Originally Posted by Yoss View Post
    If you have not already, I encourage you to look at this thread:
    http://forums.cryptozoic.com/showthr...light=gaussian

    Your idea is also an interesting way to go about it. I didn't do a computational complexity check on mine.
    I was not aware of your work (I've been playing Hex, and reading these forums, for less than a year). I think your approach is good as well! Perhaps slightly less efficient, since it does not analyze the costs and threshold requirements of cards, but the fact that it works without taking the semantics of the cards into consideration can be seen as an advantage in and of itself. It requires fewer random numbers per shuffle than mine, but its overall computational complexity seems to be O(n^2), where n is the deck size. (Mine is O(n+b), btw.)

    I will probably try implementing your gaussian shuffle in my script, so as to compare its results with my algorithm.

    Quote Originally Posted by Yoss View Post
    In any case, despite me perhaps favoring such a change, I highly doubt Hex will ever adopt anything other than a uniform shuffler.
    Yeah, I'm not getting my hopes up either. But making a suggestion never hurts.
    Last edited by Xylo; 02-11-2016 at 02:42 PM.

  6. #6
    Quote Originally Posted by Xylo View Post
    Thanks to you three for your feedback.


    Well, the 1.6% figure is a specific example of extreme screw. If, for instance, we consider that all starting hands with 0, 1, 5, 6, or 7 resources are bad, we get much higher occurrence rates:
    + 22.2% of bad starting hands with the unbiased shuffler (almost one game out of four!)
    + 4.0% of bad starting hands with my shuffler (with b=100)
    + 8.5% of bad starting hands with Yoss' gaussian shuffler (according to this post, could probably be improved by decreasing sigma)

    You are obviously right that such a shuffler would be complicated to explain to new players, though.

    What about the existing mulligan rule, though? And how does this compare to giving a free mulligan on 0 or 7 shards?
    There are vastly simpler solutions to the problem of "I feel like I get too many bad hands" IMO, and ultimately bad hands are a desirable part of TCGs (the point of shuffling the cards at all is that sometimes you get better hands/draws than other times, and you need to design decks that can deal with that).

    The main issue I see with this system is that you made the b value selectable in order to keep the system optional. But it doesn't stay optional: it means you need to understand the meta of which b to choose to benefit your deck and screw over your opponent's. And the actual shuffling logic is way too complicated for something a new player has to learn to 'break into' Hex.

    Ditch the selectable b value (use a fixed value instead), and I'd rate it about equal to Yoss's shuffler: I think it's overly complicated and a solution to things I don't consider problems, but it's probably not going to add any problems either, so it's just pointless, not actually damaging.

  7. #7
    I agree that a fixed b value (set by the designers to be "optimal") would be best.

    EDIT:
    As a development exercise (not part of your final algorithm since it would slow you down), you should be able to define a total affinity function for the deck, which would give you a metric of the deck's "playability". For example, you could start at zero then go through each card and add its affinity with the card below it (N-1 calculations). You could then plot that affinity metric after each of your "b" loops, with b set to a very large number. Looking at that plot of playability versus optimization iterations, you should see asymptotic behavior and could pick a point that gives you something like 80% optimization versus that asymptote. You would then use the corresponding number of optimizations as your value for b henceforth.
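    That development exercise could be sketched like this (hypothetical code: `affinity` is whatever pairwise function the shuffler uses, passed in as a parameter, and for simplicity this sketch accepts or rejects swaps using the total metric rather than the local one):

    ```python
    import random

    def total_affinity(deck, affinity):
        """Deck-wide 'playability' metric: each card's affinity with the card
        directly below it (N-1 calculations, as described above)."""
        return sum(affinity(deck[i], deck[i + 1]) for i in range(len(deck) - 1))

    def tuning_curve(deck, affinity, max_b=5000, rng=random):
        """Run the swap loop with a very large b, recording the metric after
        each iteration so the asymptotic behavior can be plotted."""
        deck = deck[:]
        rng.shuffle(deck)
        n = len(deck)
        curve = [total_affinity(deck, affinity)]
        for _ in range(max_b):
            i, j = rng.randrange(n), rng.randrange(n)
            before = total_affinity(deck, affinity)
            deck[i], deck[j] = deck[j], deck[i]
            if total_affinity(deck, affinity) < before:
                deck[i], deck[j] = deck[j], deck[i]  # undo a worsening swap
            curve.append(total_affinity(deck, affinity))
        return curve
    ```

    Plotting `curve` against the iteration index should show the plateau described above; the fixed b could then be read off where the curve reaches, say, 80% of its asymptote.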
    Last edited by Yoss; 02-11-2016 at 06:26 PM.

  8. #8
    I definitely need this. At least in PvE mode.

  9. #9
    Thanks for the post.
    I can't decide if I'm for or against it, but it is certainly interesting when you put in so much effort into explaining how it works, and showing the effects it would have on the game through detailed simulations and results.

    I agree that the meta-game around players picking values of 'b' would be more intimidating than it is worth (net negative, especially for the new players this is aimed to help).
    If the approach were used, I think it would be better to have a consistent fixed approach.

  10. #10
    +1 for this. Most people are going to stick to 1/2-shard decks if we keep the same "pure random" algorithm we have now.
