  #21
09-28-2014, 12:16 PM
Pixelosis
Senior Member
Join Date: Jan 2013
Posts: 114

Quote:
Originally Posted by WalterM
The manipulation at the level where ranking, App Store placement, search result ranking, third-party app rankings and so on create a self-sustaining, high-download, high-ranking app seems aimed more at affecting those metrics (up or down) by exploiting the App Store design than at influencing individual users to download the app. Users don't seem to care all that much about 4 or 4.5 or 5 stars or the appearance of some hit reviews. The rest is just meaningless gaming amongst developers and a cause of much unnecessary angst.

From what we have been able to understand from our studies, success in those metrics is a function of the velocity and direction of an app's downloads/ratings/revenues rather than of the absolute numbers.
That's a safe bet; the store would measure how alive your app is, and an overall stagnation of ratings would be treated as a penalty, compared with apps that get a constant stream of fresh ratings.
That's almost a given, considering Apple needs to produce dynamic charts. It could not just count the ratings' values without looking at when they were posted.

Quote:
Because of the number of apps and the clustering of apps in the combined metrics on those parameters, small changes seem to make a huge difference at that self-sustaining level.
I don't understand how small changes would make said difference. The self-sustaining level supposes that the app attracts new gamers almost regardless of ratings, reviews or anything else.
It cannot be both in a self-sustaining state and vulnerable to small-scale, statistically irrelevant alterations.

Quote:
What disturbs us the most in what we have observed, and why we have started on this to precipitate a review of the process by Apple, is the suspicion that the App Store may itself be compromised from the inside (intentionally or not), and not just by external bot activity. There are no defensive measures one can take against that.
One rogue element wreaking havoc from within a large company for his or her own benefit wouldn't be a first, but that's a grave accusation nonetheless.

Quote:
For example, our ratings count has been frozen since the 24th. Things like this do not happen because of events on the outside. This has happened before, with our previous update.
This would mean it already happened before October 2013. Logs, maybe?
  #22
09-28-2014, 12:18 PM
Pixelosis
Senior Member
Join Date: Jan 2013
Posts: 114

There are certain things that need to be pointed out, and most of them can be shown thanks to App Annie's statistics service.

First of all, I observed the performance of your app in the following countries: United States, UK, New Zealand, Australia, Canada, Russia and Germany.

In the US, one of the hardest yet most lucrative markets to crack, your app has been hovering in the top 50 apps in the Word category since Jan 1st. In Australia, the app also holds a good position in the same category, often within the first 100.
Over its entire life, the app has been doing rather well in those countries. It started to slip slightly at one point, but nothing really massive, and after the 3.1 update in October 2013 it actually started climbing again.
More importantly, we can see that your app's ranking has absolutely not suffered at all. It's even doing better, and for example reached its highest spot of the year, #21, on September 11th and 12th.

In other countries, your app ranks lower, but we don't see a drop following May 21st or 24th.
So basically, even if the net rating gain was zero, we can see that it has had zero influence on your app. In fact, from this alone, we might infer that review removal has no relevance at all, or at best plays a small part in Apple's ranking algorithms.

Also, App Annie reproduces the list of all ratings and reviews, per country. For September thus far, you've gotten more than 15 reviews with ratings in the US alone.

If the removal is done internally, whether by a rogue element at Apple or by one of their routines, why haven't you created a few test accounts, given yourself a couple of three-star ratings, waited a few hours or days, and checked whether those ratings were removed or not?



Mind you, if one wanted to hurt an app, I guess it would be possible to script a process that posts multiple ratings in small increments: it would initially help your app a bit (you wouldn't gain that many ratings), but all of a sudden, all the accounts would be ordered to remove their ratings at once at a key time.
That would surely cause your ranking to take a hit. That, however, would be far-fetched and highly inefficient, since the slow drip would play out over several months, long enough for the app to rise, become successful, and easily absorb a sharp loss in its overall quantity of ratings.



Also, how can we be sure that you haven't tried to cheat Apple's App Store and got caught, or that you're not trying to generate some kind of buzz?
  #23
09-28-2014, 03:59 PM
Jez Hammond
Developer
iPad, iOS 5.x
Join Date: Oct 2012
Location: UK
Posts: 43

Quote:
Originally Posted by WalterM
This is off on a tangent from this thread and I do not want to turn it into an App Store discussion thread, but just to explain why we are pursuing this and are unhappy about what we have found ...
I have a couple more questions which it would be nice to eliminate. I hope that isn't off topic, as the conversation looks quite deep right now!

Consider: the user is asked to rate, selects yes, the App Store loads up, and the user can now do many things which might result in no rating actually being posted. Examples might be: no password / pursued other App Store usage / immediately pressed the home button thinking they have 'tricked' your game into no longer nagging / received a phone call.

Also, uninstalling a game offers removal of Game Center data; does uninstalling also remove reviews? Even if not immediately, as they certainly wouldn't appear immediately... (no device to test/confirm atm)

And here's the thing: does your statistics generator take this into consideration at all, and if so, how could it possibly know whether anyone completed the proposed action (rate this app), uninstalled, or just did nothing except play again next time? It does sound like your Flurry(?)-type gathering of actions is presuming completion. You might need to incorporate your test into a third-party app and see if similar results turn up or not. Though I might be misunderstanding how you count the differences, and such data could be freely available for other apps already, in which case my questions could be eliminated.

I can confirm that reporting a broken App Store apparently gets you a canned, function-key-selected auto-message. I once went ahead and coded an update to fix a black icon on the store, as nobody seemed to care to even acknowledge it was happening! A week later v1.0.1 appeared (and the app went from charting to not charting). Now that was years back and I am done trying to find out what happened, so good luck with this issue, because nobody cares about the small guy. Oh, that started out humble but ended up reminding me of a nightmare, sorry about that.
  #24
09-28-2014, 09:06 PM
WalterM
Junior Member
iPad (4th Gen), iOS 7.x
Join Date: Aug 2014
Posts: 11

Pixelosis, thanks for continuing to keep a critical eye on this thread and for raising some interesting questions that give me the opportunity to clarify. This kind of scrutiny of App Store goings-on is very good for the developer community as a whole.

In this particular post I will address the meta-discussion type of comments, since they may not be of interest to many readers, and cover the discussion of the data and its interpretation in a separate post because of the forum's length constraints.

Quote:
One rogue element wreaking havoc from within a large company for his or her own benefit wouldn't be a first, but that's a grave accusation nonetheless.

You certainly have at least a theory and a grave accusation formulated.
I have to address this first.

I would appreciate you not mis-characterizing my statements in this manner, because there are legal implications to doing so. I have not made any accusations, grave or otherwise. On the contrary, I have explicitly stated that it could happen for any number of speculative reasons, but that it is not knowable by us, whereas Apple can determine it easily, and it is therefore NOT worth going into here.

If it obviously reads as an accusation to an average reader, then I apologize.

We can go back and forth on this forever for argument's sake, but I will stop here on this aspect in the interest of making progress on the study of the data.

Quote:
The motives, the 'why' precisely, are what make the difference. Are they good or bad?
Good or bad helps us guess where it comes from. It helps us guess whether someone would try to fly under the radar or wouldn't care. It would help us guess what happens next (the best thing a theory can do is provide a 'forecast' of a behaviour or phenomenon).
I hear what you are saying, and it is a valid point. The problem is that such speculation may not lead to anything meaningful, because such guesses just aren't provable or knowable (unless the data itself proves or explains them well). It makes straw-man arguments like the following possible: "It is because X might be doing this, but X has no reason to do this, and therefore the data is not an anomaly." I am sure you see the fallacy and danger in such arguments.

The main motivation is to arrive at observations, based on the data, that provide meaningful support for the conclusion that this cannot come from normal, legitimate user activity (because of patterns, probability, access to ratings before they are published, etc.); that should be sufficient to prompt Apple to investigate.

If you believe that a good theory of why and how leads to an explanation of the data's behavior, please feel free to post the theory and why such a theory is necessary to explain the data; that would certainly help. Without that connection, it would be a distraction and easily dismissed.

Quote:
Requiring an algorithm that would precisely mimic the process you observe is overkill. A simple set of solid statistics would do.
It would also be rather easy for Apple to know whether the reviews have been removed by peers, or whether they've been removed by a moderating function activated from within Apple.
Agreed. Would welcome any such observation/thought that would prompt/help Apple to do such an investigation.

Quote:
If the removal is done internally, whether by a rogue element at Apple or by one of their routines, why haven't you created a few test accounts, given yourself a couple of three-star ratings, waited a few hours or days, and checked whether those ratings were removed or not?
I am not saying here whether we have or have not, within what could never be construed as illegal or against Apple policies (we don't want to run that risk). Anyone could check this on their own, too.

Quote:
Mind you, if one wanted to hurt an app, I guess it would be possible to script a process that posts multiple ratings in small increments: it would initially help your app a bit (you wouldn't gain that many ratings), but all of a sudden, all the accounts would be ordered to remove their ratings at once at a key time.
That would surely cause your ranking to take a hit. That, however, would be far-fetched and highly inefficient, since the slow drip would play out over several months, long enough for the app to rise, become successful, and easily absorb a sharp loss in its overall quantity of ratings.
You are correct, but this is not what is happening, as indicated by the data. As I mentioned, there are no net removals except on May 21, which is likely connected to the published event when Apple cleaned house. We don't believe it is related to this activity at all. If that were part of the activity, we would have seen more net removals either before or after, and we have not seen them.

This is what I mean by coming up with speculations that are consistent with, supported by, or explanatory of the observed data. Otherwise, it is easy to propose any number of theories that can be easily shot down.

Quote:
Also, how can we be sure that you haven't tried to cheat Apple's App Store and got caught, or that you're not trying to generate some kind of buzz?
Great question!

How do we know you aren't behind, or vested in, this activity and trying to obfuscate it with tangential and straw-man arguments that shift the focus away from what is going on in the data, or to call our credibility into question? :-)

The reason for both is the same - it would be counterproductive to such a motive.

If we had done something to get caught by Apple, we would have received a friendly letter by now, especially after we have been trying to provoke this public interest, which would be kind of dumb to do if we had anything to hide or anything that could let Apple boot us out of the App Store.

In the same way, your participation is keeping this thread active in the forum, raising more interest and keeping it alive rather than letting it get buried, so it would be counterproductive to a motive of making this "go away". :-)
  #25
09-28-2014, 09:26 PM
WalterM
Junior Member
iPad (4th Gen), iOS 7.x
Join Date: Aug 2014
Posts: 11

Continuing with the discussion of the analysis of the data itself.

Quote:
Originally Posted by Pixelosis
One single account doing all the rating posting and removal would stick out like a sore thumb. You wouldn't need any algorithm here.
On the contrary: the first reaction from people looking at our logs was NOT that it was a single account. Second, such activity would not have been obvious even to us if we hadn't scraped every hour, unlike the app aggregation companies that sweep once a day. The algorithm may not have anticipated that and may only have accounted for end-of-day results/consequences OR, as you suggested, for what could be assumed to be the normal Apple monitoring activities. All shady activities have a weak point that is obvious in retrospect, which is why they get caught!
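
For anyone who wants to replicate this kind of hourly sweep, a minimal sketch follows (Python against the public iTunes lookup endpoint; the app id is a placeholder and this is an illustration of the idea, not our actual instrumentation):

Code:
# hourly_ratings_poll.py - sketch of an hourly App Store ratings poller.
# Assumptions: APP_ID is a placeholder; the public iTunes lookup endpoint
# exposes userRatingCount and averageUserRating for the app.
import json
import time
import urllib.request

APP_ID = "000000000"  # hypothetical app id
LOOKUP_URL = "https://itunes.apple.com/lookup?id=%s&country=us" % APP_ID

def snapshot():
    """Fetch the current rating count and average for the app."""
    with urllib.request.urlopen(LOOKUP_URL) as resp:
        info = json.load(resp)["results"][0]
    return info.get("userRatingCount", 0), info.get("averageUserRating", 0.0)

prev_count, prev_avg = snapshot()
while True:
    time.sleep(3600)  # one sweep per hour instead of one per day
    count, avg = snapshot()
    # A daily sweep only sees the net change; an hourly sweep can catch a
    # +1 followed by a -1 that nets out to zero by the end of the day.
    print("%s | dCount=%+d | avg %.4f -> %.4f"
          % (time.strftime("%b %d %H:%M:%S"), count - prev_count, prev_avg, avg))
    prev_count, prev_avg = count, avg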

Quote:
So as far as it goes, if I read you correctly, your theory is that the goal of this process is to make ratings stagnate, and that it relies on some obfuscation so it doesn't get spotted.
Obviously, it's not the act of removing ratings that obfuscates anything, but how they are removed.
The removals look like they target scores at random (instead of going only for the 4* or 5*) and so appear more natural, plus the changes are so small and constant that they don't shine like a thunderbolt in the dark.
Yes, roughly. Our report will make it much clearer and bring out some nuances, but we are still analyzing a bit more and will present it to Apple first before posting it here. Meanwhile, I am crowd-sourcing this analysis here, which has helped immensely through the tips received. There are no net removals of ratings in our theory, since that is not consistent with the data overall.

Quote:
Thing is, if this process can be intensified in frequency while keeping the same magnitude per action, it could literally turbo-grind the ratings of apps that usually get larger quantities of ratings per day.
Not sure I understand your thoughts in the above. Can you please explain them, independent of the other points, in a post of their own? Thanks.

Quote:
I don't understand how small changes would make said difference. The self-sustaining level supposes that the app attracts new gamers almost regardless of ratings, reviews or anything else.
It cannot be both in a self-sustaining state and vulnerable to small-scale, statistically irrelevant alterations.
The self-sustaining state is a crowded space (and often a fixed pie), and one can be knocked off of it easily (not necessarily because of manipulation). An analogy I can think of (to be taken as a very rough one) is soaring in a thermal. In self-sustained mode in the App Store, the app is like a glider or a bird circling in a thermal: it gets lift without actually having to do anything. But it is also easy to fall out of a thermal with some small deviations.

This can happen naturally with changes in demographics, competition, app updates or user behavior, or potentially by manipulation if you are close to the "edge of the thermal". Once off the "thermal", apps tend to fall rapidly, depending on their "glide ratio". I don't wish to go off on too much of a tangent here, but that is what I meant by small changes making large differences.

In theory, one can envision a manipulating activity designed to knock an app "off of the thermal" with just enough changes, letting the natural "sink" qualities of the App Store design take care of the rest. So it would only be needed for a small period (unless the app developer is doing things to counteract it). Such an algorithm is consistent with what we have seen in our download numbers and ratings since early December (before the start of the holiday season). The performance of an app is not linear in its rankings but almost exponential after the first 10 or so in each category.

Quote:
There are certain things that need to be pointed out, and most of them can be shown thanks to App Annie's statistics service.
Thanks for looking up the App Annie statistics. That is indeed one of the sources we have used, along with our download statistics, which only we have access to.

I will just point out that each country's behavior is different, and our app caters to US-centric crosswords (the rest of the world has different kinds of crosswords and doesn't much appreciate the Americanisms in US crosswords). So nothing can be inferred from other countries, whose traffic is in the noise compared to the US and typically comes from US expats. The rankings work independently in each country.

Quote:
If the removal is done internally, whether by a rogue element at Apple or by one of their routines, why haven't you created a few test accounts, given yourself a couple of three-star ratings, waited a few hours or days, and checked whether those ratings were removed or not?
I am not saying here whether we have or have not, within what could never be construed as illegal or against Apple policies (we don't want to run that risk). Anyone could check this on their own, too.

I also need to clarify something that may have created confusion:

Quote:
For example, our ratings count has been frozen since the 24th. Things like this do not happen because of events on the outside. This has happened before, with our previous update.
I meant Sept 24th, 2014 (last week) above. And I meant the number of user star ratings reported (ratings and rankings are sometimes confused for each other). By "rating" I mean only the reported user star ratings in the App Store, which is what we are measuring.

One of the motivations for us to post this in public is to see if it would have an effect on the rating data observed. And we may have succeeded.

The freeze I mentioned is in the same logs I linked for activity. There have been no new ratings (read: ZERO) added to our app since Sep 24, despite an average of about 2.5 ratings a day since May 21, and even more before then for the current update, along with absolutely no +1, -1 flipping.

Reasonable people will agree that it is highly improbable that this is explained by normal user behavior, unless you are willing to postulate that all users decided to stop rating in unison on Sept 24 and that the +1, -1 activity also stopped at exactly the same time.
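
To put a rough number on "highly improbable": treating ratings as independent arrivals at our average of about 2.5 a day, the chance of a full week with zero ratings can be approximated with a simple Poisson model. A back-of-the-envelope sketch, using only the figures quoted above:

Code:
# poisson_freeze.py - how unlikely is a 7-day ratings freeze at ~2.5/day?
# Assumption: ratings arrive independently (Poisson); the rate is the
# average reported above, not anything known about Apple's pipeline.
import math

rate_per_day = 2.5
days = 7
lam = rate_per_day * days       # expected ratings over the window: 17.5
p_zero = math.exp(-lam)         # Poisson P(k = 0) = e^(-lambda)
print("Expected ratings in %d days: %.1f" % (days, lam))
print("P(zero ratings in the window): %.2e" % p_zero)   # ~2.5e-08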

There are no significant changes in our download data before and after Sept 24, and no significant difference in the number of people being taken to the App Store in our sampling log. And no, it is not iOS 8 usage being different: from the logs, most of our audience, including new downloads (who tend to be more conservative and less tech-savvy than typical gamer audiences), have not switched to iOS 8 yet.

Possible theories are:
1. App Store ratings have not been working since Sept 24. Not true, since they are working for other apps.
2. Apple became aware of this and suspended rating updates to our app while they investigate. This we would welcome very much, since that is the main intent.
3. IF there is an entity affecting our ratings from the inside, they shut off the algorithm after becoming aware of the public discussion without restoring the normal processing, or they took the long weekend off for the Jewish New Year with new ratings backing up. :-)

I think most reasonable people will agree that there is something not quite right in what is going on, even if they disagree about what and how. Apple could solve this mystery in a few minutes by investigating. We will continue to build our case until they do.
  #26
09-28-2014, 10:02 PM
WalterM
Junior Member
iPad (4th Gen), iOS 7.x
Join Date: Aug 2014
Posts: 11

Hi Jez, I will explain briefly, as this instrumentation is not our creation but something we picked up a while ago in a forum or a blog or somewhere, and the rationale made sense to us.

Quote:
Originally Posted by Jez Hammond
Consider: the user is asked to rate, selects yes, the App Store loads up, and the user can now do many things which might result in no rating actually being posted. Examples might be: no password / pursued other App Store usage / immediately pressed the home button thinking they have 'tricked' your game into no longer nagging / received a phone call.
This is correct, which is why you might never see a 100% "conversion rate". Note that people can also rate by going directly to the App Store, which cancels out some of those, so the number of ratings averaged over a week or more will be some percentage of that log count. But the key statistical insight is that if there is a "conversion rate" of about X%, that conversion rate will not change much over time UNLESS there is a significant design change in the App Store that affects user behavior. For example, if the URL for the app page suddenly changed or stopped working, you would see it almost immediately in your statistics as the "conversion rate" plunged.

If, hypothetically, only 1 in 10 of the people going to the App Store were leaving a rating, the rest not doing so for one or more of the reasons you have listed, the same ratio would likely continue, with some small variation, regardless of the number of people involved (which would vary with the number of downloads: the more people download and use the app, the more are likely to see and tap on that link). This is how statistical sampling techniques are designed.

As far as we know, uninstalling an app does NOT remove the ratings or reviews. You can see this for yourself by rating an app, uninstalling it, and then going to your account in iTunes on a Mac and looking at your list of reviews.

We also do not believe that deactivating an account removes the rating, UNLESS Apple does some housekeeping occasionally. We believe this is what happened on May 21, 2014, related to some cleanup of suspect accounts along with deactivated accounts.

To answer your question: the statistical sampling method does not assume anything about whether someone actually rates or not. But if you had a consistent 1-in-10 conversion over time, you would expect a certain number of ratings for any given number of such entries in the log. For example, for 43 entries one would expect about 4.3 ratings on average. When you suddenly see zero ratings, the log tells you that it is not because of reduced usage of your app or a lack of intent to rate it; something else is going on. Without it, you have no visibility, and anyone can claim that no one wants to rate your app. Everyone should do this for their app.
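
As a sketch of that expectation check, using the hypothetical 1-in-10 rate and the 43-entry example above (the numbers are illustrative only):

Code:
# conversion_check.py - is "zero ratings from n prompts" consistent with p?
# Assumption: each logged App Store visit converts independently with
# probability p (the hypothetical 1-in-10 from the discussion above).
p = 0.10     # assumed historical conversion rate
n = 43       # logged "taken to the App Store" entries in the window

expected = n * p               # ~4.3 ratings expected on average
p_zero = (1.0 - p) ** n        # binomial P(0 conversions in n trials)
print("Expected ratings: %.1f" % expected)
print("P(zero ratings | p=%.0f%%, n=%d): %.4f" % (p * 100, n, p_zero))  # ~0.011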

This rationale made sense to us, so we incorporated it. It also provides rich information about user behavior that you can use to improve your app design. For example: how much time does it take for users to get to that stage? What fraction of people reach that stage (relative to downloads)? Can you change the design so that more people reach that stage, if it means they are enjoying the app more? Has a design change resulted in fewer people reaching that stage? And so on.

In fact, we have had times when that fraction was greater than 1, because people can rate the app by going directly to the App Store before they ever see our link, and that would not be logged. They cancel out some or all of the people who never rated even after being taken to the App Store. But for a given app and demographic, with no changes to the App Store design affecting the "conversion rate", that fraction should remain the same, even if it is very different for different apps.

Quote:
I can confirm that reporting a broken App Store apparently gets you a canned, function-key-selected auto-message. I once went ahead and coded an update to fix a black icon on the store, as nobody seemed to care to even acknowledge it was happening! A week later v1.0.1 appeared (and the app went from charting to not charting). Now that was years back and I am done trying to find out what happened, so good luck with this issue, because nobody cares about the small guy. Oh, that started out humble but ended up reminding me of a nightmare, sorry about that.
Thanks for the anecdote and support. Much appreciated.
  #27
09-29-2014, 04:00 AM
Jez Hammond
Developer
iPad, iOS 5.x
Join Date: Oct 2012
Location: UK
Posts: 43

Quote:
Originally Posted by WalterM
Hi Jez, I will explain briefly, as this instrumentation is not our creation but something we picked up a while ago in a forum or a blog or somewhere, and the rationale made sense to us.
Hi Walter, I am familiar with analytics software, though the popular one I used seemed to have the majority of actions 'not get through'. I think a more direct approach (hosting a server) would be far superior to a patchy free service (my experience is, again, a couple of years out of date).

Agreed, there should be at least some conversion rate from a proposed rating. I can only imagine that what you are reading is (again) out of synchronisation. It's like a bubble sort, where the data has a multi-pass journey that must be completed before the result is valid (for whatever reason on the store; maybe extremes are considered differently than 2s and 4s). Better still, think of screen tearing (when no V-sync is present), except on multiple levels: what you would see is at least one random sample from at least two moments in time. The same would apply to prematurely reading the results of a sorting algorithm. Now if you [the rating provider] have a network of incoming changes, they will arrive in a random order (timing of sample location, latency, etc.) where no single sample contains the full picture; it can only adjust the current 'big picture', which is approximately based on that particular sweep [and adjusted every 400 years, plus every midnight in a European country house, haha ;].

Though I still find it extremely unlikely to see tens of missing ratings over a day or two; over an hour, though, I think it is quite plausible, because at the end of the day your average is not diminishing at all. It would be futile for anyone to try to adjust an average either way; though I could speculate on exceptions, I don't want to give any potential malicious types ideas (I presume we all write here with this in mind, and hence continue to keep an open mind about the whole scene and even mankind).

Agreed that (normal usage) uninstalling is not what is going on here. Deactivating an account would not be it either, unless big A did so with an array of suspect accounts. It would be in everyone's interest not to disclose the success rate, else we could end up in a situation where 'the invisible men' survive by natural process! Better to play them at their own game and mostly conceal investigations, imo.

I think it is safe to presume that nobody rates *iOS* apps in iTunes on a Mac/PC except people who hate touch-screen keyboards! Yes, the conversion rate would remain the same, which brings me to another consideration: some obscure app change could provoke a major change in rating habits. That might be something like an 'ask me later' timeout being adjusted so that it now falls out of the sweet spot. I've seen players change ratings on my game from 5* to 1* while claiming they used to love it :/ it's like expecting common-sense conversation from a parrot, haha.

Glad to chip in on this, and happy to help discourage any malicious things in existence, because they ruin people's lives big time, and it's just plain unsportsmanlike if we lose our current level playing field that allows everyone a chance to publish their efforts. Keep up the investigating; I think people are curious to see a result either way, but most will not comment on such topics unless something has happened to them personally. This is life.
  #28
Today, 09:29 PM
WalterM
Junior Member
iPad (4th Gen), iOS 7.x
Join Date: Aug 2014
Posts: 11

Leaving no doubt that there are things going on within the App Store that are affecting the app: the user ratings for the app completely froze (no ratings left during the week were added to the app's ratings) for exactly a week, and then 29 ratings appeared in a single sweep this morning, after all of them had disappeared for a short period. Perhaps the algorithm/entity "managing" our app ratings took a vacation. :-)

Code:
Oct 1 08:30:01 |18 9 1 0 1 |4.6104
Oct 1 16:30:01 |0 0 1 0 -1 |4.61142
and the +1, -1 pattern has resumed, as you can see above.
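
For anyone following along with the raw numbers, here is a rough sketch that parses lines in this format (the five numbers are the per-star deltas from 5* down to 1*, the last field is the running average) and flags the zero-net flips. It is an illustration, not our production script:

Code:
# flip_detector.py - parse sweep-log lines like the excerpt above and flag
# "+1, -1" flips that net out to zero. Column reading is as described in
# the lead-in: five per-star deltas (5* down to 1*), then the running average.
import re

LINE = re.compile(r"^(\w+ +\d+ [\d:]+) \|([-\d ]+)\|([\d.]+)")

def parse(line):
    m = LINE.match(line)
    if m is None:
        return None
    stamp, deltas, avg = m.groups()
    return stamp, [int(x) for x in deltas.split()], float(avg)

log = [
    "Oct 1 08:30:01 |18 9 1 0 1 |4.6104",
    "Oct 1 16:30:01 |0 0 1 0 -1 |4.61142",
]
for line in log:
    stamp, deltas, avg = parse(line)
    net = sum(deltas)
    flip = net == 0 and any(d > 0 for d in deltas) and any(d < 0 for d in deltas)
    print("%s net=%+d avg=%.5f %s" % (stamp, net, avg, "FLIP" if flip else ""))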

Jez, regarding your comment about sync problems: the above is not a sync problem. What we are measuring is more like packet loss than packet delay, and it is not affected by delays when measured and averaged over time.

Also, regarding your comment on other developers not commenting: perhaps not publicly, but we have been quite heartened by the private communications we have received, which have helped us make progress on this.

For example, a helpful and smart reader of a forum where we posted imported the data into an Excel spreadsheet to visualize it, which showed a better way to look at it than the raw numbers. Combined with another reader's tip that there might be obfuscating behavior at work, it allowed us to process the data in a way that led to a breakthrough.

As a sneak preview, if we plot the average rating over the lifetime of that log, including all ratings registered, we get the following graph:

[graph: average rating over the lifetime of the log, all ratings included]
The downtrend is visible, but not much detail of how exactly it happens.

However, if we filter out the ratings from the single account moving from rating to rating at almost every sweep with no net additions or removals, we get a much clearer picture, like the following:

[graph: the same average with the single flipping account filtered out; repeating patterns shaded]
This allowed us to notice some very interesting repeating patterns (in the shaded areas), based on what appears to be a triggering activity, and gave us a good idea of how the ratings declined algorithmically (and in clusters) during that time. The details of what this reveals will be the centerpiece of our report.
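
For those who want to try the same filtering on their own sweep logs, here is a minimal sketch of the idea: drop the zero-net flip sweeps, then recompute the running average. The real processing has more nuances that will be in the report:

Code:
# filter_flips.py - recompute the running average with zero-net flip sweeps
# removed. Sweeps are (timestamp, [d5, d4, d3, d2, d1]) per-star deltas,
# matching the log excerpt above; star weights run 5 down to 1.
def running_average(sweeps, drop_flips=False):
    counts = [0, 0, 0, 0, 0]            # running totals for 5*..1*
    out = []
    for stamp, deltas in sweeps:
        net = sum(deltas)
        flip = net == 0 and any(d > 0 for d in deltas) and any(d < 0 for d in deltas)
        if not (drop_flips and flip):
            counts = [c + d for c, d in zip(counts, deltas)]
        total = sum(counts)
        stars = sum(c * w for c, w in zip(counts, (5, 4, 3, 2, 1)))
        out.append((stamp, stars / total if total else 0.0))
    return out

sweeps = [
    ("Oct 1 08:30:01", [18, 9, 1, 0, 1]),
    ("Oct 1 16:30:01", [0, 0, 1, 0, -1]),   # the +1/-1 flip from the log
]
for raw, filt in zip(running_average(sweeps), running_average(sweeps, True)):
    print("%s raw=%.4f filtered=%.4f" % (raw[0], raw[1], filt[1]))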

Tips received from forum members describe a lot of suspicious patterns they had noticed for themselves before I posted this thread, not all of which we will be able to explore or verify on our own. But even allowing for some paranoid suspicions, there appears to be enough of a feeling among more than a few that what is happening in App Store ratings is not strictly on the level, both for apps doing well and for apps that mysteriously lose downloads/ratings.

We are just not sure what is real and what is not real in the App Store any more!

Some of the more credible observations we have received from readers (which can be verified in a few instances), unrelated to our own logs, include:

1. App rating totals that go up in each sweep with such regularity (it seems almost always in multiples of 2) that it looks automatic. Reviews appear with quota-like regularity (e.g., exactly 2 or exactly 3 reviews per day), often all in the same sweep. Apps go through this phase for a fixed period and then suddenly nothing, even if nothing else has changed.

2. One-liner generic reviews that appear with regularity for certain apps, which look automatically generated and are very often connected to the quota in 1 above. Looking at the review history of the names associated with these reviews (in iTunes) shows a pattern of reviewing apps with a similar review quota in operation, and these reviewing accounts all appear to have similar delays between reviews (typically many months). There is a strong possibility that many of these are fake accounts drawn from a large pool used to create those reviews. We are working on a script to crawl through these kinds of short reviews and form a graph of the apps related via such reviews (and their rankings), just out of curiosity; a sketch of the idea follows after this list.

3. Atypical rating activity clustered around Tuesday evenings/Wednesday early mornings, or at month ends. Not sure what happens in the App Store on Wednesdays or at the beginning of the month. Coincidentally, the freeze for our app reported above also ran Wednesday to Wednesday.
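
As a rough sketch of the crawler idea mentioned in 2 above: the feed path is the public iTunes customer-reviews RSS (its exact JSON shape may vary), the app ids are placeholders, and the "graph" is just a reviewer-to-apps index, so treat this as an illustration of the approach rather than the script we are writing:

Code:
# review_graph.py - sketch: link apps via reviewers who leave short, generic
# reviews. The feed path is the public customer-reviews RSS; the JSON shape
# (feed.entry, author.name.label, content.label) is assumed, so parse defensively.
import json
import urllib.request
from collections import defaultdict

APP_IDS = ["000000001", "000000002"]      # hypothetical app ids to compare
FEED = ("https://itunes.apple.com/us/rss/customerreviews/"
        "page=1/id=%s/sortby=mostrecent/json")
SHORT = 60                                # chars; crude one-liner threshold

def short_reviewers(app_id):
    """Names of reviewers who left short reviews on the given app."""
    with urllib.request.urlopen(FEED % app_id) as resp:
        entries = json.load(resp).get("feed", {}).get("entry", [])
    if isinstance(entries, dict):         # a single entry comes back as a dict
        entries = [entries]
    names = set()
    for e in entries:
        text = e.get("content", {}).get("label", "")
        name = e.get("author", {}).get("name", {}).get("label")
        if name and text and len(text) <= SHORT:
            names.add(name)
    return names

# Reviewer -> apps index; a reviewer seen on 2+ apps links those apps.
by_reviewer = defaultdict(set)
for app_id in APP_IDS:
    for name in short_reviewers(app_id):
        by_reviewer[name].add(app_id)

for name, apps in sorted(by_reviewer.items()):
    if len(apps) > 1:
        print("%s links apps: %s" % (name, ", ".join(sorted(apps))))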

It gets curiouser and curiouser...