I’m trying to export large (e.g. 16M rows) SQL Server 2008 views for eventual import into MySQL. The underlying setup for these views is bizarre: a SELECT for a single row can take over two minutes; a SELECT * FROM view with no WHERE clause is much faster, but still too slow to finish an export in any client we’ve connected with. The only reasonable solution I’ve found is to use SQL Server Management Studio, right-click the DB > Tasks > Export Data… and target a flat file. This seriously seems to be about 100x as fast as running the query directly in SSMS and 200x as fast as an ODBC connection.
There’s only one (major) gotcha: the flat-file writer in SQL Server Management Studio (ver 10.0.2531.0) is broken for CSV output. There’s no configuration I can find that will correctly escape characters that need it. By default it doesn’t enclose any values in quotes, and even if you specify the double quote as the “Text Qualifier” (yay for obscure proprietary descriptions), double quotes within values won’t be escaped in any way: not with \" and not with "" (the CSV standard). This means you can either export fixed-width columns, or try to find characters that never occur in your data to use as field/line delimiters. The escape character \ is also not escaped, so you could end up with a carriage return in your import (MySQL’s loader treats backslash as an escape by default) when the original data was literally “foo\road”. Stupid.
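For contrast, here’s what correct escaping looks like. A quick sanity check with Python’s csv module (which follows the doubled-quote convention) shows the same problem values surviving a standards-compliant writer:

    import csv
    import io

    # Values that trip up the SSMS flat-file writer: an embedded double
    # quote, and a literal backslash that must survive round-tripping.
    rows = [
        ["id", "note"],
        [1, 'she said "hi"'],
        [2, "foo\\road"],  # literal backslash + "road", not a carriage return
    ]

    out = io.StringIO()
    csv.writer(out).writerows(rows)  # embedded quotes are doubled, fields quoted as needed
    print(out.getvalue(), end="")
    # id,note
    # 1,"she said ""hi"""
    # 2,foo\road

In rational CSV the backslash isn’t special at all: only the double quote needs doubling, and only fields containing delimiters, quotes, or newlines get enclosed.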
Again, I can’t just select all rows in the tool and then save the resulting recordset as CSV (which isn’t broken), because the views are so big that the queries time out (the resultsets would probably crash the tool anyway) and because there’s no way to partition the query that doesn’t destroy performance. I’m going to have to create a local SQL Server DB, do a DB-to-DB transfer, then use a client connection to export.
Update: You can at least use obscure Unicode characters as delimiters, though the writer outputs invalid UTF-8 after the first row, so you have to export as “Unicode” (UTF-16 little-endian: 2-byte code units with a leading \xFF\xFE BOM). If you pick delimiters like ʖ and ʁ and go this route, you won’t be able to use traditional line-by-line file-reading functions. You’ll have to stream a fixed number of bytes at a time, building up columns (and validating that the number of columns per row stays constant) and converting to rational CSV that can be handled by MySQL. Another vote for setting up a local SQL Server instance.
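Here’s a minimal sketch of that streaming conversion, assuming the UTF-16LE export described above and the ʖ/ʁ delimiters; the column count, chunk size, and the convert_export name are my own placeholders, so adjust for your view:

    import codecs
    import csv

    FIELD_DELIM = "\u0296"  # ʖ — chosen because it never occurs in the data
    ROW_DELIM = "\u0281"    # ʁ — same reasoning
    EXPECTED_COLS = 12      # assumption: the view's known column count
    CHUNK_BYTES = 65536     # even, so we never split a UTF-16 code unit

    def convert_export(src_path, dst_path):
        # The incremental decoder consumes the leading \xFF\xFE BOM and
        # copes with multi-byte sequences that straddle chunk boundaries.
        decoder = codecs.getincrementaldecoder("utf-16")()
        buf = ""
        with open(src_path, "rb") as src, \
             open(dst_path, "w", newline="", encoding="utf-8") as dst:
            writer = csv.writer(dst)  # emits rational CSV: quotes doubled
            while True:
                chunk = src.read(CHUNK_BYTES)
                if not chunk:
                    break
                buf += decoder.decode(chunk)
                rows = buf.split(ROW_DELIM)
                buf = rows.pop()  # the last piece may be a partial row
                for row in rows:
                    cols = row.split(FIELD_DELIM)
                    if len(cols) != EXPECTED_COLS:
                        raise ValueError("bad row: %d columns" % len(cols))
                    writer.writerow(cols)
            if buf:  # flush a trailing row with no final row delimiter
                cols = buf.split(FIELD_DELIM)
                if len(cols) != EXPECTED_COLS:
                    raise ValueError("bad row: %d columns" % len(cols))
                writer.writerow(cols)

When loading the result into MySQL, it’s also worth disabling backslash interpretation, e.g. LOAD DATA INFILE … FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '', so data like “foo\road” comes through intact.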