Benchmarking SPDY vs. HTTP in 4 steps

Conclusions first: SPDY’s efficiency (header compression, TCP windowing and slow start, etc.) not only allows for higher throughput, but inline images are pushed along with the index page, relieving the browser of having to first parse and request those resources before they are sent. While SPDY’s response time was faster than HTTP for a ...

Most Popular

  • Visualizing how kernel 3.0’s initial congestion window increase is lowering response times

    When the recent IETF internet draft matures to an RFC, it’ll be the first increase in the initial congestion window (cwnd / TCP_INIT_CWND) since 2002. The implementation has already made its way into kernel 2.6.39 earlier this year, and I thought I’d take 3.0 for a spin and demonstrate the small-object acceleration it yields. I’m testing using a VPS node 100ms RTT away and loading objects ranging from 4kB to 128kB:

    [charts: response-time measurements for the 4kB to 128kB objects]

    The head start the larger congestion window offers favors smaller objects, and in the 8kB range the entire content can be sent in a single round trip: with the initial window raised to 10 segments, roughly 10 x 1460 bytes (about 14kB) fits in the first flight.

    [charts: the 8kB object delivered in a single round trip]
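    If you want to reproduce the test, here’s a minimal sketch of the timing harness described above (the hostname and object paths are my placeholders; point it at your own server). It opens a fresh TCP connection per fetch so every transfer starts from the initial congestion window:

    import socket
    import time

    HOST = "vps.example.com"  # placeholder: a server ~100ms RTT away

    def time_fetch(path):
        # fresh connection per request so each transfer starts from initcwnd
        s = socket.create_connection((HOST, 80))
        t0 = time.time()  # start after the handshake, timing the transfer itself
        req = "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % (path, HOST)
        s.sendall(req.encode())
        while s.recv(65536):  # drain the response until the server closes
            pass
        s.close()
        return time.time() - t0

    for size in ("4k", "8k", "16k", "32k", "64k", "128k"):
        # placeholder: objects published at /<size>.bin on the test server
        print("%6s  %6.0f ms" % (size, time_fetch("/%s.bin" % size) * 1000))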

  • Seeing EDNS client-subnet in two steps

    1. Build a dig client with support

    2. Query an authoritative server that speaks the language

    Now that we have a compiled version of dig that supports including the client subnet in the query, we’re able to query authoritative servers with the flags enabled.
    Here’s what a regular query for our favorite video site looks like:

    [dig output: a standard A query, answered with North American records]
    Notice that the A records handed back are in North America. Now let’s resolve the record for a client in China:

    [dig output: the same query with a client-subnet option for a Chinese client]
    The response now has an additional CLIENT-SUBNET flag specifying that this response is only valid for that subnet. The next difference is the lack of A records in the response; instead we get a CNAME chain, which will require another lookup.

    On the UDP side, an additional record of type OPT is included in both the request and the response with the extended data. At this time Wireshark doesn’t support displaying the specific data, but a patch is available at https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=7552

    [Wireshark capture showing the OPT record in the request and response]
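    If you’d rather not build a patched dig, the same query can be reproduced in a few lines of Python. This is a sketch assuming the dnspython library (which grew client-subnet support as dns.edns.ECSOption); the qname, subnet, and server address are placeholders to adapt:

    import dns.edns
    import dns.message
    import dns.query

    # placeholder: a /24 for a client in China; substitute any subnet to test
    ecs = dns.edns.ECSOption("1.2.4.0", srclen=24)
    query = dns.message.make_query("www.youtube.com.", "A", use_edns=0, options=[ecs])

    # placeholder: ns1.google.com, an authoritative server that speaks client-subnet
    response = dns.query.udp(query, "216.239.32.10", timeout=5)
    print(response)  # look for the CLIENT-SUBNET option and the CNAME chain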

  • 10 things I didn’t know about Amazon’s CloudFront

    After migrating my blog to Amazon Web Services I decided to accelerate it using their CDN offering. Overkill? Perhaps. Gratifying? Absolutely!  With almost 20 worldwide PoPs, the response times as seen by Pingdom plummeted during my migration last month:

    [Pingdom response-time chart around the migration]

    Here are 10 things I didn’t know going in:

    1. CloudFront is barebones, offering only simple static caching. There are no accelerated proxies or advanced features like header manipulation, URL rewriting, cookie exchanges, etc.

    2. It is reliable and fast. In San Jose, I’m getting over a 5x improvement in response times compared to only using the EC2 origin:

    [response-time comparison: CloudFront vs. EC2 origin]

    Here are my numbers for the past 30 days based on Pingdom’s global polling:

    [Pingdom 30-day global polling stats]

    3. Origin max-age directives of less than 3600 seconds are rounded up to an hour, so if your content is updated more frequently you’ll need to use invalidation, versioning, or not cache it at all.

    4. There is no UI for invalidating content; it’s all done via APIs that you need to build against, and going beyond the monthly limit gets costly. Here’s a PHP implementation for single file invalidation (see the sketch after this list for a scripted alternative).

    5. If you want even more speed, consider using their “Route 53” DNS service which you can manage from within the same console as CloudFront’s.  Their authoritative DNS servers are in the same 20 worldwide PoPs.

    6. Updating distributions (CNAMEs, invalidations, enabling https, etc.) can take 20 or more minutes to push to all edges.

    7. Logging is disabled by default.  To enable it you’ll need an S3 bucket to receive the logs.

    8. CF has an aliases feature, so take advantage of it to enable domain sharding. By using 2 or more CNAMEs, the browser can make more concurrent requests. I’m using cdn and www.

    9. CloudFront makes HTTP 1.0 requests to your origin, so be sure it still correctly responds with gzipped content.  For example, nginx by default serves uncompressed files on 1.0 requests even when compressed ones are requested.  To override this you can add this to your nginx.conf: “gzip_http_version 1.0;”

    10. CloudFront is not included in the 1 year free Amazon AWS offer, so expect a bill for CF as well as for any origin fetch bandwidth that exceeds your free monthly aggregated bandwidth.  There are 2 monthly fees: GB out (about 2 dimes per GB) and # of requests (‘bout a penny per 10k). You get lower prices if you commit for more. My bill for the month was 25 cents (~50k object requests):

    [AWS bill showing the month’s CloudFront charges]
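    As promised in item 4, here’s a minimal invalidation sketch. The post links a PHP implementation; this version uses Python with the boto3 SDK instead (my substitution, not the original code), and the distribution ID and path are placeholders:

    import time
    import boto3

    cf = boto3.client("cloudfront")
    cf.create_invalidation(
        DistributionId="E1234EXAMPLE",  # placeholder: your distribution's ID
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/index.html"]},
            # CallerReference must be unique per invalidation request
            "CallerReference": str(time.time()),
        },
    )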

    Looking back, moving to EC2 and CloudFront was a sound decision which not only reduced my monthly VPS expenses but also greatly improved performance and reliability.

  • High speed ffmpeg cluster encoding with Python and avidemux

    When it comes to clustered video codec conversion there are two general scenarios:

    Scenario 1: Encoding many videos across many computers
    Scenario 2: Encoding a single video across many computers

    Scenario 1 is ubiquitous, and most encoding clusters are likely running at full steam with a backlog of videos waiting in queue. Scenario 2 is less common but useful under deadlines, where concertedly converting a single video across your cluster reduces the wall-clock time tremendously.

    I searched the google cavern for scenario 2 and didn’t find any existing ffmpeg cluster implementations, so I spent my Sunday afternoon writing a python script to do just that.  Now, using the 4 PCs at home, I’m converting a single video 300% faster.  So how does it work?  In a sentence, I split the encoding into ffmpeg tasks (using -ss and -t), distribute the tasks to my cluster, and copy the parts into the final version using avidemux (--append and --rebuild-index).   Is it perfect?  Probably far from it.  But as a first draft it worked great.  I tested several sources and formats and the video/audio merged seamlessly and in sync.  The code has no error catching and you may need to massage it to work in your setup.  I’ll work on a second draft converting to h.264 instead of flv.

    
    #!/usr/bin/python
    # Version 0.1
    # Big todo is adding error catching

    import sys
    import os
    from re import search
    from subprocess import PIPE, Popen

    #configure the two parameters below
    #1. The name of all the hosts in the cluster that will participate
    hostList = ['one', 'two', 'three', 'four']
    #2. The NFS mounted dir which contains the video you need encoded
    encodeDir = "/net/ffcluster"

    #Function definitions
    def getDurationPerJob(totalFrames, fps):
        #seconds of video each host is responsible for encoding
        return totalFrames / float(fps) / len(hostList)

    def getFps(file):
        information = Popen(("ffmpeg", "-i", file), stdout=PIPE, stderr=PIPE)
        #fetching tbr (1), but can also get tbn (2) or tbc (3)
        #examples of fps syntax encountered are 30, 30.00, 30k
        #note: a 'k' suffix (e.g. 30k) would still need normalizing before float()
        fpsSearch = search(r"(\d+\.?\w*) tbr, (\d+\.?\w*) tbn, (\d+\.?\w*) tbc", information.communicate()[1])
        return fpsSearch.group(1)

    def getTotalFrames(file, fps):
        information = Popen(("ffmpeg", "-i", file), stdout=PIPE, stderr=PIPE)
        #first HH:MM:SS.xx match in ffmpeg's output is the Duration line
        timecode = search(r"(\d+):(\d+):(\d+).(\d+)", information.communicate()[1])
        return ((((float(timecode.group(1)) * 60) + float(timecode.group(2))) * 60) + float(timecode.group(3)) + float(timecode.group(4)) / 100) * float(fps)

    def clusterRun(file, fileName, durationPerJob, fps):
        start = 0.0
        end = durationPerJob
        runCount = 0
        jobList = []
        #submits equal conversion portions to each host
        for i in hostList:
            runCount += 1
            runFfmpeg = "ssh %s 'cd %s;ffmpeg -ss %f -t %f -y -i %s %s </dev/null'" % (i, encodeDir, start, end, file, fileName + "_run" + str(runCount) + ".flv")
            start += end + 1 / float(fps)
            jobList.append(Popen(runFfmpeg, shell=True))
        #wait for all jobs to complete
        runCount = 0
        for i in hostList:
            jobList[runCount].wait()
            runCount += 1
        #append/rebuild final from parts and rebuild index
        avidemuxHead = "avidemux2_cli --autoindex --load %s_run1.flv --append %s_run2.flv " % (fileName, fileName)
        avidemuxTail = "--audio-codec copy --video-codec copy --save %sFinal.flv" % (fileName)
        #add --appends for additional hosts above the first 2
        for i in range(len(hostList) - 2):
            avidemuxHead = "%s --append %s_run%d.flv " % (avidemuxHead, fileName, i + 3)
        runAvidemux = "%s %s" % (avidemuxHead, avidemuxTail)
        Popen(runAvidemux, shell=True).wait()

    #Main begin
    sourceFile = sys.argv[1]
    fps = getFps(sourceFile)
    totalFrames = getTotalFrames(sourceFile, fps)
    durationPerJob = getDurationPerJob(totalFrames, fps)
    fileName = os.path.splitext(sourceFile)[0]

    clusterRun(sourceFile, fileName, durationPerJob, fps)
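    To run it (assuming you save the script as ffcluster.py, a name I’ve made up), start from inside the NFS-mounted encodeDir so the relative paths line up, and pass the source video as the only argument: “./ffcluster.py source.avi”. Each host in hostList encodes its slice in parallel over ssh, and avidemux then stitches the parts into sourceFinal.flv.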