Day 479
Vim exclamation mark to toggle settings
A `!` either forces an action or toggles a setting. For `:set` options it toggles: `:set cursorline` turns the option on, `:set nocursorline` turns it off, and
`:set cursorline!`
flips between the two.
Python tabulate module
tabulate generates nice tables in various formats! Things like:
print(tabulate.tabulate(db, headers=db.columns))
      epoch    loss    val_loss    val f-score
--  -------  ------  ----------  -------------
 0        1    4.31        4.62          0.579
 1        2    3.72        3.61          0.705
 2        3    3.54        3.25          0.722
 3        4    3.31        3.06          0.737
 4        5    3.19        2.93          0.736
 5        1    4.31        4.62          0.581
 6        2    3.72        3.61          0.72
 7        3    3.54        3.25          0.755
 8        4    3.31        3.06          0.755
 9        5    3.19        2.93          0.764
10        6    3.12        2.83          0.798
11        7    2.95        2.76          0.779
12        8    2.91        2.69          0.757
13        9    2.84        2.64          0.816
14       10    2.68        2.63          0.835
15       11    2.71        2.56          0.83
16       12    2.69        2.52          0.825
17       13    2.62        2.49          0.826
18       14    2.6         2.46          0.845
19       15    2.56        2.44          0.84
The tabulate page on PyPI is the basic documentation, with a rendered example of each tablefmt. It even supports jira! And pipe is the usual Markdown format. Let’s try:
| | epoch | loss | val_loss | val f-score |
|---|---|---|---|---|
| 0 | 1 | 4.31 | 4.62 | 0.579 | 
| 1 | 2 | 3.72 | 3.61 | 0.705 | 
| 2 | 3 | 3.54 | 3.25 | 0.722 | 
| 3 | 4 | 3.31 | 3.06 | 0.737 | 
| 4 | 5 | 3.19 | 2.93 | 0.736 | 
| 5 | 1 | 4.31 | 4.62 | 0.581 | 
| 6 | 2 | 3.72 | 3.61 | 0.72 | 
| 7 | 3 | 3.54 | 3.25 | 0.755 | 
| 8 | 4 | 3.31 | 3.06 | 0.755 | 
| 9 | 5 | 3.19 | 2.93 | 0.764 | 
| 10 | 6 | 3.12 | 2.83 | 0.798 | 
| 11 | 7 | 2.95 | 2.76 | 0.779 | 
| 12 | 8 | 2.91 | 2.69 | 0.757 | 
| 13 | 9 | 2.84 | 2.64 | 0.816 | 
| 14 | 10 | 2.68 | 2.63 | 0.835 | 
| 15 | 11 | 2.71 | 2.56 | 0.83 | 
| 16 | 12 | 2.69 | 2.52 | 0.825 | 
| 17 | 13 | 2.62 | 2.49 | 0.826 | 
| 18 | 14 | 2.6 | 2.46 | 0.845 | 
| 19 | 15 | 2.56 | 2.44 | 0.84 | 
Tensorflow: how does training happen with a nan loss? TODO
How does Tensorflow keep training when the loss is nan? It keeps doing something, accuracy changes, etc. Is the gradient still calculated per batch as usual?
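One piece of the puzzle is plain IEEE 754 arithmetic: a NaN gradient poisons any weight it touches, and NaN then propagates through every later computation. A minimal sketch with a hand-rolled SGD step (no TensorFlow involved, just float behavior):

```python
import math

# Minimal sketch: once a gradient is NaN, a plain SGD update
# makes the weight NaN, and it stays NaN from then on.
w = 0.5
grad = float("nan")   # e.g. from log(0) or 0/0 inside the loss
w = w - 0.1 * grad    # NaN propagates through the update
print(math.isnan(w))  # -> True
```

So if the framework keeps reporting changing metrics, those are presumably computed from parts of the graph that have not (yet) been touched by the NaNs.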
Note
Einstein / Netzah “do your own thing”

In the middle of the desert I can say everything I want.