Wednesday, October 16, 2013

Using an older Mac Mini as a DVR

I'm not a cable TV subscriber, but we occasionally watch shows on broadcast TV that come in over the antenna. Recently, I decided to enhance our setup with DVR functionality using an older Mac Mini (from 2009) that was already connected to the TV. Long story short, I went with the combination of an HDHomeRun tuner, MythTV 0.27, and a Schedules Direct subscription, and things are working great. Some notes on the process:
  • Web searches for "mac dvr" and related terms often lead to EyeTV products and software. I went with the HDHomeRun instead of EyeTV hardware since EyeTV doesn't seem to offer anything in the US that can tune signals directly from a coaxial cable (its US devices instead take analog RCA inputs for capturing from a cable box). I skipped the EyeTV software since MythTV is free and I could tolerate the painful setup process.
  • I set up MythTV following the instructions here and here. One issue I ran into was that the setup program would crash on launch until I addressed a known issue with the MySQL time zone tables. Otherwise, things went smoothly, though the process was quite tedious.
  • It took some tinkering to find an appropriate playback profile for my older hardware. Right now, I'm using the VDA Slim profile, which doesn't drop any frames but leaves some interlacing artifacts.
Overall, I'm quite happy with the setup. The MythTV front end interface has some rough edges, but the underlying features are rock solid so far, and you can't argue with the price. It's great to finally have basic DVR functionality; watching TV "live" now seems very primitive.

Sunday, May 19, 2013

Gmail Notifr

I recently discovered Gmail Notifr (via The Changelog), and in spite of the odd name, it's the best Gmail notifier for Mac that I've found. I figure it's time to reduce my dependence on Sparrow, since it's unmaintained, and so far this notifier + Gmail in Firefox is working well (with multiple accounts).

Sunday, April 21, 2013

Infinite Call-String Sensitivity and Object Sensitivity are Incomparable

I just returned from a fantastic week at Dagstuhl, discussing the ins and outs of pointer analysis. I wanted to briefly write up one technical point regarding the comparability of call-string-sensitive and object-sensitive points-to analysis that seemed to not be as widely known as I had thought. For some background on terminology, etc., you can see Milanova et al., Lhoták and Hendren, Smaragdakis et al., or our recent book chapter (not an exhaustive list).

Milanova et al.'s original work on object sensitivity states:

In general, object sensitivity (an instance of the functional approach to context sensitivity [Sharir and Pnueli 1981]) and call chain context sensitivity (an instance of the call string approach) are incomparable in terms of precision.

But, no examples are given to illustrate this point. It's fairly clear how to construct examples showing that k-limited call strings can be more precise than k-limited object sensitivity, and vice versa. But, without k-limiting, i.e., with "infinite" call strings or object sensitivity (ignore recursion for the moment), the question of incomparability becomes a bit murkier. So, here are a couple of examples to show that incomparability holds even without k-limiting.

An example showing call strings to be more precise than object sensitivity isn't too tricky: (The comments name the objects being allocated or indicate the points-to relationships caused by the statement.) A call-string-sensitive analysis analyzes the calls at lines 10 and 11 separately, computing the precise facts z -> o1 and w -> o2, but an object-sensitive analysis does not distinguish them, since both calls have the same receiver object a1. Hence, the object-sensitive analysis additionally computes the imprecise points-to facts z -> o2 and w -> o1. Note that k-limiting plays no role here.
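The code for this example did not survive here; the following sketch (with hypothetical class and variable names) is consistent with the description above, arranged so that the two calls fall on lines 10 and 11:

```java
class A {
    Object id(Object p) { return p; }
}

class Example1 {
    public static void main(String[] args) {
        A a1 = new A();          // allocates receiver object a1
        Object x = new Object(); // allocates o1
        Object y = new Object(); // allocates o2
        Object z = a1.id(x);     // line 10: z -> o1
        Object w = a1.id(y);     // line 11: w -> o2
    }
}
```

A call-string-sensitive analysis keeps the two call sites apart, so z points only to o1 and w only to o2; an object-sensitive analysis analyzes id once under the single receiver a1, merging the two calls.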

The example showing object sensitivity to be more precise is a bit trickier (and perhaps less widely known): Here, p is a boolean with some unknown value, so z can point to either a1 or a2. With object sensitivity, the call at line 12 is analyzed separately for receiver values a1 and a2, yielding only the precise facts a1.f -> a1 and a2.f -> a2. A call-string-sensitive analysis only analyzes the call once, and therefore also computes the imprecise facts a1.f -> a2 and a2.f -> a1. Again, k-limiting plays no role. This example shows that object sensitivity provides a form of path sensitivity that is truly orthogonal to the call-return matching obtained via call strings.
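The code for this example is likewise missing; here is a sketch consistent with the description (hypothetical names again, with the virtual call placed on line 12):

```java
class B {
    B f;
    void storeSelf() { this.f = this; } // this.f -> this
}

class Example2 {
    public static void main(String[] args) {
        boolean p = args.length > 0; // some unknown value
        B a1 = new B();              // allocates a1
        B a2 = new B();              // allocates a2
        B z = p ? a1 : a2;           // z -> a1 or z -> a2
        z.storeSelf();               // line 12
    }
}
```

Object sensitivity analyzes storeSelf separately per receiver, so this -> a1 in one context and this -> a2 in the other, giving only a1.f -> a1 and a2.f -> a2. A call-string-sensitive analysis sees a single call site with this -> {a1, a2}, and so also derives the spurious a1.f -> a2 and a2.f -> a1.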

Note that both of these examples are contrived, and it's unclear whether this incomparability has any implications for analysis of real programs, since for the most part, only k-limited variants are used in practice. For an empirical evaluation of call strings vs. object contexts, see Lhoták and Hendren.

Anyway, hopefully this post has been useful in clarifying a small point regarding these analyses.

Friday, February 08, 2013

Long-Form Article Search

There is a lot of long-form journalism and magazine-style writing available for free online these days (see Longreads, Longform, The Feature, etc.). While there are more new articles published each day than most people can keep up with, I've often wanted to search for long-form articles on a particular topic. So, I threw together a quick Google Custom Search:

(You can access the search page directly here.) I didn't spend much time on this, but it turns up surprisingly good results, perhaps because even articles that peripherally mention your search terms are often still interesting. Some examples:

The search returns results from sources that (1) I think have generally good writing, (2) make some / most of their articles freely available, and (3) are amenable to restricting results to longer articles via URL patterns. (I had to exclude a couple sources that I wanted since their sites didn't meet criteria (2) and/or (3).) It's not perfect; e.g., searches for actors / directors will often turn up a lot of film reviews. But, it seems to work reasonably well. If you have any feedback, let me know.

Friday, November 23, 2012

On Conference Paper Bidding

I spent nearly a full day bidding for papers to review for a couple of conferences last week, which led me to think about bidding strategies. As brief background, program committee (PC) members for a conference are often asked to bid on paper submissions to help the PC chair assign reviewers to papers. Typically, possible bids are something like:

  • high interest
  • moderate interest
  • definitely not interested (i.e., block the paper)
  • neutral

By default, a neutral bid is entered for all papers. A reviewer could be assigned any paper that she hasn’t blocked, but the PC chair attempts to line up the assignments with reviewer interests as much as possible, subject to other constraints (e.g., a minimum number of reviewers per paper).

One question that comes up is when to block papers. I tend to not block papers: rather than deciding which papers to block, I try to enter enough moderate+high bids that it’s unlikely I’ll be assigned many papers that I haven’t bid on. This strategy has worked ok thus far, but I wonder if it would be worth the additional effort to distinguish papers that I really don’t want to review.

I also wonder how to balance current interests vs. expertise. Say you’re an expert in some research topic A, but your current research interests are in other areas. Should you bid on papers on topic A since you can give an expert review, even though you’re not so interested in that area anymore? Or should you not bid on those papers, to increase the likelihood of being assigned papers you are more interested in? In such cases, I’ve mostly been giving moderate interest bids to topic A papers, particularly if I’ve seen papers on A accepted to other conferences that I would have reviewed negatively. But, I’ve heard of reviewers going as far as simply blocking papers they don’t want to review, independent of expertise level.

Anyway, I’m relatively new to this process, so I’d be interested to hear thoughts on bidding strategies that are good for reviewers, good for overall paper review quality, etc. Pointers to existing resources are also welcome (I found this interesting discussion via a quick search).

Update: See Mike Hicks's comment below on a cool solution to the interests vs. expertise issue he devised for POPL 2012.

Update (12/4): Check out the Toronto paper matching system, which does an algorithmic paper assignment based both on bids and by matching submitted papers against a "publication profile" for each PC member.

Wednesday, February 29, 2012

Online Supplements for Alex Ross's Books

I've been slowly working my way through Listen to This, Alex Ross's excellent second book. His first book, The Rest Is Noise, was also mind-blowingly good. I wanted to point out that the online audio guides for these books are also highly worthwhile, with many music excerpts to go along with each chapter of each book. (The only problem is you may spend a lot more time on those pages than you intended.) I've already bought a couple recordings based on the excerpts he linked, and I still have many chapters to go.

Saturday, February 25, 2012

Ranking Shows by Season using IMDB

I enjoyed this recent post by Matt Zoller Seitz ranking the first 14 seasons of The Simpsons. I wondered how this ranking would compare to a ranking based on episode ratings on IMDB. I did some tinkering with node.js and jsdom and came up with this. Sorting the output (and eliding non-existent seasons) yields:

8.34 average rating for season 6
8.33 average rating for season 5
8.31 average rating for season 7
8.26 average rating for season 4
8.18 average rating for season 8
8.16 average rating for season 3
8.01 average rating for season 2
7.91 average rating for season 9
7.82 average rating for season 1
7.63 average rating for season 10
7.39 average rating for season 12
7.35 average rating for season 11
7.17 average rating for season 23
7.10 average rating for season 16
7.09 average rating for season 18
7.04 average rating for season 13
6.98 average rating for season 19
6.96 average rating for season 17
6.96 average rating for season 15
6.92 average rating for season 20
6.91 average rating for season 22
6.90 average rating for season 21
6.90 average rating for season 14

I didn't do anything fancy like try to weight ratings by the number of votes, but the results are still interesting, and pretty similar to Zoller Seitz's ranking. To try another show, just change the URL in the script to the IMDB URL for the show with /eprate tacked on the end.
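The original script used node.js and jsdom and isn't reproduced here; for reference, the core aggregation it performs (average the episode ratings for each season, then sort descending) can be sketched as follows, shown in Java with made-up sample data:

```java
import java.util.*;

class SeasonRanking {
    // Average the per-episode ratings for each season, then sort descending by average.
    static List<Map.Entry<Integer, Double>> rank(Map<Integer, List<Double>> ratingsBySeason) {
        List<Map.Entry<Integer, Double>> averages = new ArrayList<>();
        for (Map.Entry<Integer, List<Double>> e : ratingsBySeason.entrySet()) {
            double sum = 0;
            for (double r : e.getValue()) sum += r;
            averages.add(Map.entry(e.getKey(), sum / e.getValue().size()));
        }
        averages.sort((x, y) -> Double.compare(y.getValue(), x.getValue()));
        return averages;
    }

    public static void main(String[] args) {
        // Hypothetical sample data standing in for scraped episode ratings.
        Map<Integer, List<Double>> bySeason = new HashMap<>();
        bySeason.put(1, Arrays.asList(7.8, 7.9, 7.8));
        bySeason.put(6, Arrays.asList(8.3, 8.4));
        for (Map.Entry<Integer, Double> e : rank(bySeason))
            System.out.printf("%.2f average rating for season %d%n", e.getValue(), e.getKey());
    }
}
```

This ignores vote counts, matching the unweighted averaging described above.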