Tokenize a text file into sentences with Python

I recently needed to split a document into sentences in a way that handled most, if not all, of the annoying edge cases (abbreviations, decimal numbers, and so on). After a frustrating stretch of trying to get a snippet I found on Stack Overflow to work, I finally figured it out:

import codecs

import nltk.data

# Read the file as UTF-8; the third positional argument to
# codecs.open() is the encoding.
with codecs.open('path/to/text/file/text.txt', 'r', 'utf-8') as doc:
    content = doc.read()

# Load the pre-trained Punkt sentence tokenizer for English.
# If the model is missing, download it once with nltk.download('punkt').
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

# Print one sentence per line, separated by a horizontal rule.
print('\n-----\n'.join(tokenizer.tokenize(content)))
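Punkt's stock English model already knows a long list of abbreviations, but if it still splits on ones specific to your text, you can extend that list before tokenizing. Here is a minimal sketch; note that it reaches into the tokenizer's private _params attribute, which works in current NLTK but is not a stable public API and may change between versions:

import nltk.data

tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

# Register extra abbreviations so a following period is not treated
# as a sentence boundary. Entries are lowercase, without the final dot.
for abbrev in ('fig', 'approx', 'dept'):
    tokenizer._params.abbrev_types.add(abbrev)

text = 'See fig. 3 for details. The trend is obvious.'
print('\n-----\n'.join(tokenizer.tokenize(text)))

If you only need the default behaviour, nltk.sent_tokenize(content) is a one-line shortcut that loads the same English model under the hood.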