I just finished the rough draft of my book. All 11 chapters. That's right, eleven chapters (not the planned ten). After ten months of working on this book, I finally have a rough draft done. Ten months is also a long time, and the software (TensorFlow) has changed quite a bit in the meantime. This means I will have to go back and update the chapters, since some functionality is changing or being deprecated. But that's what you get for writing a book about software that is still technically in beta.
I must say, some chapters were much harder than others, and I should have planned for this in my timelines. Because of this, I fell behind schedule around chapter 8. Chapters 8 and 9 were technically the hardest and required a lot of my time poring over documentation and debugging.
For a few examples in the book, I had to accept that the official documentation (and other tutorials) had better and more in-depth explanations. But the publishing company really wanted me to cover those examples, so I had to make some decisions about how to do this. Do I just repeat what is out there? Do I try to cover it in more depth? Do I skirt around the issue and reference the better work? After some debate, I came to the following conclusion.
I will not reiterate official tutorials and documentation where it is explained better. I don't want someone to pay for a book that has information in it that is free elsewhere.
There was only one section where I reiterated code from the official software tutorials. I decided to do this because the official tutorial was really lacking in code explanations and felt kind of hand-wavy. Because of this, I referenced the official tutorial and told readers that we would instead explore it in much more depth. For reference, this is the 'deep-dream' tutorial in chapter 8.
For another section, I decided to concentrate on preparing a different type of dataset for use in the official tutorials. I felt this was a good approach because even though the official tutorials used a canned dataset, their methodology was sound.
I have been updating the GitHub repository here: https://github.com/nfmcclure/tensorflow_cookbook. You can find the Python scripts there, and about 50% of them have accompanying documentation. Over the next month, I will be adding more documentation as well as accompanying Jupyter notebooks. Also, I'm told the editing of the book starts soon.
Overall, my journey on writing a book (so far) has been a huge learning experience and an even bigger time drain. I have a lot more respect for authors as well. I'm not sure that I will be so eager to write a book next time either.
Soon, I can concentrate on being social, active, and posting about something other than this time-consuming book.
Thank you very much for the nice exercises and for exploring all the TensorFlow algorithms. I would appreciate it a lot if you could let us know what version of TensorFlow they were tested on.
Hi Elie, they were tested on TensorFlow 0.12 for Linux. I am in the process of building a script that will test the code against every new release of TensorFlow.
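The idea of checking the book's code against a specific TensorFlow release can be sketched roughly like this. This is a hypothetical, minimal illustration (the names `parse_version` and `is_tested_version` are mine, not from the book's repo), using plain version-string parsing so it runs without TensorFlow installed:

```python
# Sketch of a per-release compatibility check: parse a version string
# and compare its major/minor parts against the release the book's
# scripts were last tested on (0.12 at the time of writing).

def parse_version(version_string):
    """Turn a version string like '0.12.1' into a comparable tuple of ints."""
    return tuple(int(part) for part in version_string.split(".")[:3] if part.isdigit())

TESTED_VERSION = (0, 12)  # major, minor the code was last verified against

def is_tested_version(installed):
    """Return True if the installed major/minor matches the tested release."""
    return parse_version(installed)[:2] == TESTED_VERSION

# In a real check one would pass tensorflow.__version__; literals shown here:
print(is_tested_version("0.12.1"))  # True
print(is_tested_version("1.0.0"))   # False
```

A real test harness would then run each chapter's script under the new release and report which ones break, but the version gate above is the first step.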
Just bought the book, after playing with the GitHub examples that were posted in O'Reilly's Data newsletter yesterday. It is much appreciated that you are ensuring things work on all versions of TF. One of the few frustrations I have had with GCP and TensorFlow's own examples and demo pages is exactly that -- tweaking code for TF version changes. I can imagine it is an insane amount of work, especially with TF changing under your feet.
I have spotted a few typos in the Jupyter notebooks, which is to be expected. I am sure they will come out in the course of editing, but happy to issue pull requests also?
Hi John, any typo/formatting/bug pull requests are much appreciated! And yes, keeping the code up to date has been a major chore over the past few months. I've basically given up on the RNN code for the past month, mostly due to the little time I have and because I'm waiting for them to just decide where to put functions and what to call them (they keep changing).
You'd better finish before they change the coding style entirely 🙂 Define by run vs define and run: https://news.ycombinator.com/item?id=13428098
Sorry to bring this up after reading your comments on how much work writing a book like this is, but it is probably worth a brief mention of Google's recently announced TPUs, https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu, in chapter 10.
Yeah, I saw this. I've been meaning to add it, along with a few other things. I also want to add chapters on GANs and reinforcement learning. I will hopefully get around to new content this summer.