Gabriel Putnam

Gabriel Christopher Putnam

 Email: gabriel putnam at gmail
 Google Scholar: G. C. Putnam
 Wikipedia: Araesmojo
 Stack Overflow: G. Putnam
 Hacker News: araes
 Github: araesmojo-eng
 
 
  • This is the second post in a series on the statistics of the Star Trek shows of the '90s and early 2000s. Originally created back in 2015, it was a follow-on to a post covering general Star Trek summaries and specifics about The Next Generation, while this one covers data on DS9. In this update, I have reproduced most of the analyses I ran for TNG and then updated my methods with some new ways of exploring the DS9 history.
  • Original post on Reddit
  • Original post on Imgur
  • Follow-on to the post "Imgur: The Statistics of Star Trek - Ratings and Reactions" (also 🚀 archived on this site).
🌌 Deep Space Nine, Ratings (Nielsen) with StDev, All Seasons
  
🕵🏻
G. C. Putnam
🛈
This first visualization shows the mean Nielsen rating for the show by season with standard deviation bands, as I wanted to look at how the ratings performed over time and whether there was much variation (like a ringdown). Oddly, what I found was that the show's ratings bleed was almost perfectly linear, except for a sharp drop between episode 1 (TNG-scale 19) and episode 2 (a typical 13, heading downward). That makes me suspect that a large portion of the potential audience simply saw the setting, made a first-impression judgement, and left, and then the show bled slowly for the rest of its run.
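🛈
For anyone curious about the bookkeeping, the aggregation behind this chart is just a per-season mean and standard deviation. Below is a minimal pandas sketch of that step; the column names and the toy numbers are my own placeholders, not the real episode data.

import pandas as pd

# Toy stand-in for the episode table; "nielsen" is the household rating.
episodes = pd.DataFrame({
    "season":  [1, 1, 1, 2, 2, 2],
    "episode": [1, 2, 3, 1, 2, 3],
    "nielsen": [18.8, 12.9, 12.1, 11.4, 10.9, 10.2],
})

# Mean and standard deviation of the Nielsen rating per season; the chart
# draws the mean line with mean +/- std as the shaded band.
per_season = episodes.groupby("season")["nielsen"].agg(["mean", "std"])
per_season["band_low"] = per_season["mean"] - per_season["std"]
per_season["band_high"] = per_season["mean"] + per_season["std"]
print(per_season)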
2015
 
🌌 Deep Space Nine, User Ratings (IMDB, TV.com) with StDev, All Seasons
  
🕵🏻
G. C. Putnam
🛈
Almost perfectly counter to the Nielsen ratings, the viewer quality ratings of the show almost linearly improved with time (S3 and S6 were big scatter seasons). Season 4 would probably be considered a highlight of the show, as it was significantly above the linear trend in quality. As you can see later, its quality curve was smooth and curved upward, which gives the impression of a heroic cycle during the season. One theory for the overall trend might be that the show concentrated its viewer base down to the diehard viewers and then focused in on what those viewers wanted.
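🛈
The "above the linear trend" observation is easy to check mechanically: fit a line to the per-season averages and look at the residuals. A small numpy sketch of that idea follows; the per-season values are invented placeholders, not the actual DS9 averages.

import numpy as np

# Hypothetical per-season average user ratings (placeholders).
seasons = np.arange(1, 8)
avg_user_rating = np.array([6.9, 7.1, 7.3, 7.8, 7.5, 7.7, 7.9])

# Least-squares linear trend and each season's residual from it.
slope, intercept = np.polyfit(seasons, avg_user_rating, 1)
residuals = avg_user_rating - (slope * seasons + intercept)

for s, r in zip(seasons, residuals):
    print(f"Season {s}: residual {r:+.2f} ({'above' if r > 0 else 'below'} trend)")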
2015
 
🌌 Deep Space Nine, Nielsen vs Viewer Rating Correlation, All Seasons
  
🕵🏻
G. C. Putnam
🛈
For this chart, I have graphed the Nielsen Rating (NR) vs. the Viewer Rating Avg. (VR) for all seasons in red and plotted how it trends. Up and to the right means a large VR correlates well with a large NR. In the second column, I have instead plotted the change in NR (the delta) from episode to episode ([Ep i NR] - [Ep i-1 NR]) vs. the VR. In some cases this made a difference, as a rating appeared to produce a large change in NR even though the shift was hidden when viewed as whole numbers. Noisy seasons (like season 3) were helped a lot by this effect. Finally, in the third column, I also looked at the delta with a delay of one episode, to see if there was a time-delay effect on ratings with any correlation. The correlation there was fairly bad.
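🛈
If the three columns are unclear, they boil down to three correlations: raw NR vs. VR, the episode-to-episode delta in NR vs. VR, and that same delta lagged by one episode. A minimal pandas sketch is below; the series are made up, and only the bookkeeping is the point.

import pandas as pd

# Hypothetical per-episode Nielsen ratings (NR) and viewer ratings (VR).
df = pd.DataFrame({
    "nielsen": [12.1, 11.4, 11.9, 10.8, 11.0, 10.2, 10.5],
    "viewer":  [6.8, 7.4, 7.1, 6.5, 7.6, 6.9, 7.2],
})

df["delta_nr"] = df["nielsen"].diff()          # [Ep i NR] - [Ep i-1 NR]
df["delta_nr_lag1"] = df["delta_nr"].shift(1)  # delta delayed by one episode

# Pearson correlations; pandas drops the NaN rows created by diff/shift.
print("NR vs VR:          ", df["nielsen"].corr(df["viewer"]))
print("Delta NR vs VR:    ", df["delta_nr"].corr(df["viewer"]))
print("Lag-1 delta vs VR: ", df["delta_nr_lag1"].corr(df["viewer"]))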
2015
 
🌌 Deep Space Nine, Ratings (Nielsen, IMDB, TV.com), Seasons 1-7
  
🕵🏻
G. C. Putnam
🛈
Zoomed-in view of user ratings for Star Trek: Deep Space Nine, Season 1, on TV.com and IMDB, along with the Nielsen ratings. Note the previously mentioned sharp drop between episode 1 (TNG-scale 19) and episode 2 (a typical 13, heading downward), with ratings simply decaying linearly afterward.
2015
 
🌌 Deep Space Nine, Ratings vs References
  
🕵🏻
G. C. Putnam
🛈
A slightly more complex version of the reference vs. rating chart that I used for TNG. Again, I am showing how the various characters and settings appear with respect to both time and quality, and then identifying geometric groups among the set.
2015
 
🌌 Deep Space Nine, Main Cast, References / Season, Average Appearance User Rating / Deviation, Scatter and Trends
  
🕵🏻
G. C. Putnam
🛈
These plots graph the main cast appearances, the average rating per cast member, and the individual episode ratings with scatter and trends. Episode summaries on Wikipedia, IMDB, and TV.com were searched for a given keyword, in this case the names of the main cast (a small code sketch of this tagging step follows the note below). The first chart shows the number of episodes which directly reference a main cast character in the synopsis on Wikipedia or TV.com, and how those counts trend over the seasons of the show. For the second chart, all episodes associated with those keywords had their ratings averaged, and that rating is plotted here with the standard deviation up / down.
🛈
The third plot is a new type of chart I'm trying, based on some papers on statistical processing that I read in the interim. In this version, I am plotting the individual episode ratings for each character, so that the visual blob of their progress over time can be seen, and then explicitly finding a trajectory for their time on the show. I started exploring this type of chart specifically because of issues with the averaged visualization: there it looks like all the characters are very similar, with only minor differences, but the picture changes once we show the cloud of their actual ratings and the trajectories they took.
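🛈
Since the same tagging step recurs in the later charts, here is a minimal sketch of it: search each synopsis for a character name, count the hits per season, and average the ratings of the tagged episodes. The synopses, names, and ratings below are invented placeholders, not the scraped data.

import pandas as pd

# Toy stand-in for the scraped episode table.
episodes = pd.DataFrame({
    "season": [1, 1, 2, 2, 3, 3],
    "synopsis": [
        "Sisko and Kira negotiate with the Bajoran government.",
        "Odo investigates a theft on the Promenade.",
        "Bashir and O'Brien are trapped on a damaged runabout.",
        "Kira returns to Bajor to confront her past.",
        "Sisko faces the Dominion for the first time.",
        "Odo and Bashir chase a changeling through the station.",
    ],
    "rating": [7.1, 6.8, 7.6, 7.4, 8.2, 7.9],
})

for name in ["Sisko", "Kira", "Odo", "Bashir"]:
    # Tag every episode whose synopsis mentions the character.
    hits = episodes[episodes["synopsis"].str.contains(name, case=False)]
    refs_per_season = hits.groupby("season").size()   # references / season
    print(f"{name}: refs/season {refs_per_season.to_dict()}, "
          f"rating {hits['rating'].mean():.2f} +/- {hits['rating'].std():.2f}")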
2015
 
🌌 Deep Space Nine, Main Cast, Individual Nielsen Ratings by Season with StDev
  
🕵🏻
G. C. Putnam
🛈
Another option I'm trying this round is showing how characters with large numbers of appearances evolved over time. For example, Julian shows an amazing amount of growth over the show, moving from the least to the most favorite character in terms of average character-specific episode ratings.
2015
 
🌌 Deep Space Nine, Support Cast, References / Season, Average Appearance User Rating / Deviation, Scatter and Trends
  
🕵🏻
G. C. Putnam
🛈
These plots graph the supporting cast appearances, the average rating per cast member, and the individual episode ratings with scatter and trends. Episode summaries on Wikipedia, IMDB, and TV.com were searched for a given keyword, in this case the names of the supporting cast. The first chart shows the number of episodes which directly reference a supporting cast character in the synopsis on Wikipedia or TV.com, and how those counts trend over the seasons of the show. For the second chart, all episodes associated with those keywords had their ratings averaged, and that rating is plotted here with the standard deviation up / down.
🛈
The third plot shows the individual episode ratings for each supporting character, so that the visual blob of their progress over time can be seen, along with an explicitly found trajectory for their time on the show.
2015
 
🌌 Deep Space Nine, Guest Cast, References / Season and Average Appearance User Rating with Deviations
  
🕵🏻
G. C. Putnam
🛈
These plots graph the guest cast appearances and the average rating per cast member. Episode summaries on Wikipedia, IMDB, and TV.com were searched for a given keyword, in this case the names of the guest cast. The first chart shows the number of episodes which directly reference a guest cast character in the synopsis on Wikipedia or TV.com, and how those counts trend over the seasons of the show. For the second chart, all episodes associated with those keywords had their ratings averaged, and that rating is plotted here with the standard deviation up / down.
2015
 
🌌 Deep Space Nine, Setting or Antagonist, References / Season, Average Appearance User Rating / Deviation, Scatter and Trends
  
🕵🏻
G. C. Putnam
🛈
These plots graph the setting or antagonist appearances, the average rating per setting or antagonist, and the individual episode ratings with scatter and trends. Episode summaries on Wikipedia, IMDB, and TV.com were searched for a given keyword, in this case the names of the settings or antagonists. The first chart shows the number of episodes which directly reference a setting or antagonist in the synopsis on Wikipedia or TV.com, and how those counts trend over the seasons of the show. For the second chart, all episodes associated with those keywords had their ratings averaged, and that rating is plotted here with the standard deviation up / down.
🛈
The third plot shows the individual episode ratings for each setting or antagonist, so that the visual blob of their progress over time can be seen, along with an explicitly found trajectory for their time on the show.
2015
 
🌌 Deep Space Nine, Directors, References / Season and Average Appearance User Rating with Deviations
  
🕵🏻
G. C. Putnam
🛈
These plots graph director participation (2+ episodes minimum) and the average rating per show they worked on. Episode summaries on Wikipedia, IMDB, and TV.com were searched for a given keyword, in this case the names of the directors. The first chart shows the number of episodes which directly reference a director on Wikipedia or TV.com, and how those counts trend over the seasons of the show. For the second chart, all episodes associated with each director had their ratings averaged, and that rating is plotted here with the standard deviation up / down. I had no idea that LeVar Burton was such a prominent director on this show. Unfortunately, of the heavy hitters, he's not one of the best. That distinction probably goes to Allan Kroeker and James Conway (much improved, as he was an OK director on TNG). Also, Michael Dorn proved to be another character actor with a good eye behind the camera (after 300+ episodes, you'd think he might).
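🛈
The only new wrinkle in this chart is the 2+ episode cut-off applied before averaging. A short pandas sketch of that filter is below, with placeholder names and ratings rather than the real credits table.

import pandas as pd

# Hypothetical (director, episode rating) pairs.
episodes = pd.DataFrame({
    "director": ["LeVar Burton", "LeVar Burton", "Allan Kroeker",
                 "Allan Kroeker", "One-Off Director"],
    "rating":   [7.0, 7.3, 8.1, 7.9, 6.5],
})

# Per-director episode count, mean rating, and spread.
by_director = episodes.groupby("director")["rating"].agg(["count", "mean", "std"])

# Apply the 2+ episode minimum before ranking by average rating.
regulars = by_director[by_director["count"] >= 2].sort_values("mean", ascending=False)
print(regulars)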
2015
 
🌌 Deep Space Nine, Teleplay Writers, References / Season and Average Appearance User Rating with Deviations
  
🕵🏻
G. C. Putnam
🛈
These plots graph teleplay writer participation (2+ episodes minimum) and the average rating per show they worked on. Episode summaries on Wikipedia, IMDB, and TV.com were searched for a given keyword, in this case the names of the teleplay writers. The first chart shows the number of episodes which directly reference a teleplay writer on Wikipedia or TV.com, and how those counts trend over the seasons of the show. For the second chart, all episodes associated with those keywords had their ratings averaged, and that rating is plotted here with the standard deviation up / down. Within the teleplay writers, this show was heavily dominated by about five people, the kings of TNG writing, and particularly by Ira Steven Behr. The well-rated Hans Beimler moved up in episode count as well, and Peter Allan Fields performed much better on DS9 than on TNG.
2015
 
🌌 Deep Space Nine, Story Writers, References / Season and Average Appearance User Rating with Deviations
  
🕵🏻
G. C. Putnam
🛈
These plots graph story writer participation (2+ episodes minimum) and the average rating per show they worked on. Episode summaries on Wikipedia, IMDB, and TV.com were searched for a given keyword, in this case the names of the story writers. The first chart shows the number of episodes which directly reference a story writer on Wikipedia or TV.com, and how those counts trend over the seasons of the show. For the second chart, all episodes associated with those keywords had their ratings averaged, and that rating is plotted here with the standard deviation up / down. Compared to TNG, this show was MUCH more dominated by a few guiding voices, with less of what we might call RAND() or MEOW (MonstEr/challenge Of the Week) episodes. Apparently Rick Berman and Jeri Taylor are REALLY consistent at making good stories, but they were not nearly so hot at turning those into functional teleplays.
2015