|
Post by 36cygnar24guy36 on Mar 22, 2018 13:00:23 GMT
...Do you really think they don't have an internal playtester department? You know, sometimes, when they release things like Una2 was released, or when initial rules for things like Makay with 2 of the battletanks are clearly over the top, it makes me wonder how extensive the testing is. I mean, honestly, did they actually test Una2? How in the world did she come out that busted? The other possibility is that in CiD they specifically release super silly models just to see how the community reacts, but as more CiDs happen, I feel this reason is less likely. I remember the first iteration of Eilish and how bonkers he was. I would have had more respect for them if they had just stated "We want to sell magazines, so here is a crazy good model" rather than trying to justify his insane rules and inclusion in every faction, despite it making no sense.
|
|
|
Post by NephMakes on Mar 22, 2018 13:37:47 GMT
[...] when initial rules for things like Makay with 2 of the battletanks are clearly over the top, it makes me wonder how extensive the testing is. [...] The other thing could be that in CiD they specifically release super silly models just to see how the community reacts. But as more CiDs happen, I feel this reason is less likely.

In one of the recent dev hangouts they mentioned putting things into CID that they felt were probably too good, but they wanted feedback first. Their strategy was then to tone it down a bit, put it back into CID, and see if there needed to be more. It's not a "release"; it's testing.
|
|
gordo
Junior Strategist
My star is green?
Posts: 548
|
Post by gordo on Mar 22, 2018 13:51:05 GMT
[...] In one of the recent dev hangouts they mentioned putting things into CID that they felt were probably too good, but wanted feedback first. Then their strategy was to tone it down a bit, put it back into CID, and see if there needed to be more. It's not a "release", it's testing.

They definitely do that and have said so. They did it in the Everblight CiD, specifically saying "we think this is too good, confirm our fears or prove us wrong".
|
|
|
Post by Gamingdevil on Mar 22, 2018 13:54:00 GMT
[...] Then their strategy was to tone it down a bit, put it back into CID, and see if there needed to be more. It's not a "release", it's testing.

I can get on board with this. I can imagine they sometimes splurge on things that would be "cool" on a model, and rather than toning it down before putting it in CID and risking making the model lackluster and uninteresting, they choose to let the process do its work and tone it down gradually, with input from the community. That way the people participating can see all the options at once, but must choose only a couple to be on the final version.

In other cases, there might be some discussion about whether or not a model is balanced, with one of the devs thinking a model is too good while the other(s) think it's great but not over the top. They put it in CID and see what the results are. They don't owe it to anyone to start the first day of CID with "balanced" models; then everyone would just complain that the models are uninspiring but fine, so they might as well put everything in green and leave it at that.
|
|
|
Post by jisidro on Mar 22, 2018 14:02:00 GMT
I doubt Schick does a lot of playtesting... he is the Marketing Director, right? But OK... 4 guys and some unspecified collaboration... doesn't sound like a lot.

...Do you really think they don't have an internal playtester department?

I do. If they did, they would use them at cons or in the odd Insider at the very least. Why wouldn't they?

Good news! Proven wrong!
|
|
|
Post by NephMakes on Mar 22, 2018 14:41:26 GMT
...Do you really think they don't have an internal playtester department? I do. If they did they would use them at cons or in the odd Insider at the very least. Why wouldn't they?

This most recent Weekly Rumble on Twitch included a guy I'd never seen before, Jeff Olsen, who was described as "an internal playtester". At one point he said he was "literally a professional Warmachine player". That sounds like an internal playtester department to me. I imagine they used Olsen this week in part because a lot of the Wills have been out at cons.

Also, being a public-facing person is a skill. You have to learn what's okay and not okay to do or say when you're a representative of your organization in a professional setting. You have to learn to speak clearly and audibly. You may have to learn about and rid yourself of habits that are distracting or inappropriate in that setting. Some people don't want to bother with all that, and some people love it.

And I imagine there's some value in having a focused number of people be the public faces of a company. It makes it easier for your audience to learn and become familiar with your representatives when it's not a seemingly endless stream of strangers. And it cuts down on the need to train everyone for something that may only be a minor part of their job.
|
|
|
Post by jisidro on Mar 22, 2018 15:49:51 GMT
Awesome!
He is very new to the company; he left another game-related job in January 2018, according to the public info I found. Seems doing the public bit is part of the job?
Cool. Perhaps we'll get to know them as time goes by?
|
|
juckto
Junior Strategist
Posts: 124
|
Post by juckto on Mar 22, 2018 18:47:30 GMT
The other thing could be that in CiD they specifically release super silly models just to see how the community reacts. But as more CiDs happen, I feel this reason is less likely.

I disagree with your conclusion. The more CIDs that roll out with super silly rules in week 1, the more it proves they're doing it on purpose.
|
|
Provengreil
Junior Strategist
Choir Kills: 12
Posts: 850
|
Post by Provengreil on Mar 22, 2018 18:53:42 GMT
[...] I disagree with your conclusion. The more CIDs that roll out with super silly rules in week 1, the more it proves they're doing it on purpose.

This. Locke's feat in week 1 was so obviously broken that they functionally lost a week of testing on real feats. It's pretty bad form, IMO.
|
|
Choco
Junior Strategist
Gorten, best feet in the game.
Posts: 571
|
Post by Choco on Mar 22, 2018 18:57:41 GMT
[...] This. Locke's feat in week 1 was so obviously broken that they functionally lost a week of testing on real feats. It's pretty bad form, IMO.

Yeah, it definitely seemed like they had a high-concept idea for a feat that just fell apart real fast. Now, if they toned it down to maybe casting each spell once during the feat, maybe that could work. But it's gone, and it is what it is.
|
|
Ganso
Junior Strategist
Posts: 932
|
Post by Ganso on Mar 22, 2018 22:47:13 GMT
This. Locke's feat in week 1 was so obviously broken that they functionally lost a week of testing on real feats. It's pretty bad form, IMO.

That's not how any empirical method works. You don't take anything as obvious; you don't take anything as good or bad. You take things as they are, you run your tests, you analyze your data, and you reach your conclusions.
|
|
Provengreil
Junior Strategist
Choir Kills: 12
Posts: 850
|
Post by Provengreil on Mar 23, 2018 2:00:01 GMT
That's not how any empirical method works. [...] You take things as they are, you run your tests, you analyze your data, and you reach your conclusions.

I don't need to run tests to know that a bowling ball dropped off a skyscraper will hit the ground with lethal force. I didn't need the battle reports to know that free, cortex-targeting jackhammers whenever the opponent dared play the game was an issue either.
|
|
|
Post by mcdermott on Mar 23, 2018 2:04:53 GMT
[...] I didn't need the battle reports to know that free, cortex-targeting jackhammers whenever the opponent dared play the game was an issue either.

Lazy tester, likely ignored in CID.
|
|
|
Post by josephkerr on Mar 23, 2018 2:15:20 GMT
[...] I don't need to run tests to know that a bowling ball dropped off a skyscraper will hit the ground with lethal force. [...]

To be fair, the only reason you know a bowling ball dropped from a skyscraper will hit the ground with lethal force is that someone has already done the testing for you. "Obvious" is pretty debatable.
|
|
|
Post by macdaddy on Mar 23, 2018 14:36:44 GMT
I agree with Provengreil here; sometimes you can use common sense to determine whether something is OP. We all have enough experience playing the game to know that certain interactions that are terrible on paper mean bad news on the tabletop. I'm not saying everyone is right all the time, but it was pretty obvious Locke was super strong at the beginning of CiD. I do think it should be tested, even if just once to prove a point. But accusing someone of being lazy because they came to a reasonably logical conclusion about scalping aspects/systems during your opponent's turn, whenever they did something to actively participate in the game, seems unfair.

I remember a large number of people screaming that battle bears were OP without reasonable testing, and when you put them on the table they were, sure enough, too good for their points. Same thing with smog belchers. Heck, most of the dev talks are more random conversations about how things should work than battle-report data.
|
|