While recently editing someone else's code, I found that there are many differences between Keras 1.x and 2.x, so I looked it up on the official site. Below is the original text from the official documentation.
Keras 2 release notes
This document details changes, in particular API changes, occurring from Keras 1 to Keras 2.
Training
- The nb_epoch argument has been renamed epochs everywhere (see the sketch after this list).
- The methods fit_generator, evaluate_generator and predict_generator now work by drawing a number of batches from a generator (number of training steps), rather than a number of samples.
- samples_per_epoch was renamed steps_per_epoch in fit_generator.
- nb_val_samples was renamed validation_steps in fit_generator.
- val_samples was renamed steps in evaluate_generator and predict_generator.
- It is now possible to manually add a loss to a model by calling model.add_loss(loss_tensor). The loss is added to the other losses of the model and minimized during training.
- It is also possible to not apply any loss to a specific model output. If you pass None as the loss argument for an output (e.g. in compile, loss={'output_1': None, 'output_2': 'mse'}), the model will expect no Numpy arrays to be fed for this output when using fit, train_on_batch, or fit_generator. The output values are still returned as usual when using predict.
- In TensorFlow, models can now be trained using fit if some of their inputs (or even all) are TensorFlow queues or variables, rather than placeholders. See this test for specific examples.
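To make the renames above concrete, here is a minimal before/after sketch; the model, data shapes, the toy generator, and the hand-rolled add_loss penalty are arbitrary placeholder choices, not part of the release notes:

```python
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

# Toy data and generator; all shapes and sizes here are arbitrary placeholders.
x, y = np.random.random((100, 20)), np.random.random((100, 1))

def batch_gen(batch_size=32):
    while True:
        idx = np.random.randint(0, 100, size=batch_size)
        yield x[idx], y[idx]

model = Sequential([Dense(1, input_shape=(20,))])

# Manually add an extra loss term (here, a hand-rolled L2 penalty on the kernel);
# it is minimized along with the compiled loss.
model.add_loss(0.01 * K.sum(K.square(model.layers[0].kernel)))

model.compile(optimizer='sgd', loss='mse')

# Keras 1: model.fit(x, y, nb_epoch=5)
model.fit(x, y, epochs=5)

# Keras 1: model.fit_generator(batch_gen(), samples_per_epoch=3200, nb_epoch=5,
#                              validation_data=batch_gen(), nb_val_samples=320)
model.fit_generator(batch_gen(), steps_per_epoch=100, epochs=5,
                    validation_data=batch_gen(), validation_steps=10)
```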
Losses & metrics
- The objectives module has been renamed losses.
- Several legacy metric functions have been removed, namely matthews_correlation, precision, recall, fbeta_score, fmeasure.
- Custom metric functions can no longer return a dict, they must return a single tensor.
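For instance, a custom metric in Keras 2 is a function returning a single tensor; this toy metric is an arbitrary example:

```python
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

def mean_abs_error(y_true, y_pred):
    # Keras 2 custom metrics must return a single tensor, not a dict.
    return K.mean(K.abs(y_true - y_pred))

model = Sequential([Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse', metrics=[mean_abs_error])
```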
Models
- Constructor arguments for Model have been renamed:
- input -> inputs
- output -> outputs
- The Sequential model no longer supports the set_input method.
- For any model saved with Keras 2.0 or higher, weights trained with backend X will be converted to work with backend Y without any manual conversion step.
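A minimal before/after sketch of the constructor rename (the one-layer graph is an arbitrary example):

```python
from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(32,))
y = Dense(1)(x)

# Keras 1: Model(input=x, output=y)
model = Model(inputs=x, outputs=y)  # Keras 2
```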
Layers
Removals
Deprecated layers MaxoutDense, Highway and TimeDistributedDense have been removed.
Call method
- All layers that use the learning phase now support a training argument in call (Python boolean or symbolic tensor), allowing you to specify the learning phase on a layer-by-layer basis. E.g. by calling a Dropout instance as dropout(inputs, training=True) you obtain a layer that will always apply dropout, regardless of the current global learning phase. The training argument defaults to the global Keras learning phase everywhere (see the sketch after this list).
- The call method of layers can now take arbitrary keyword arguments, e.g. you can define a custom layer with a call signature like call(inputs, alpha=0.5), and then pass an alpha keyword argument when calling the layer (only with the functional API, naturally).
- __call__ now makes use of TensorFlow name_scope, so that your TensorFlow graphs will look pretty and well-structured in TensorBoard.
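A minimal sketch of the per-call training flag and of custom call keyword arguments; the Scale layer and its alpha argument are hypothetical examples, not part of the Keras API:

```python
from keras import backend as K
from keras.engine.topology import Layer
from keras.layers import Input, Dropout

inputs = Input(shape=(8,))

# Dropout that is always applied, regardless of the global learning phase.
always_on = Dropout(0.5)(inputs, training=True)

class Scale(Layer):
    """Hypothetical layer whose call accepts an extra keyword argument."""
    def call(self, inputs, alpha=0.5):
        return inputs * alpha

# The extra keyword argument is forwarded to call (functional API only).
scaled = Scale()(inputs, alpha=0.2)
```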
All layers taking a legacy dim_ordering argument
dim_ordering has been renamed data_format. It now takes two values: "channels_first" (formerly "th") and "channels_last" (formerly "tf").
Dense layer
Changed interface:
- output_dim -> units
- init -> kernel_initializer
- added bias_initializer argument
- W_regularizer -> kernel_regularizer
- b_regularizer -> bias_regularizer
- b_constraint -> bias_constraint
- bias -> use_bias
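Putting the Dense renames together, a before/after sketch (the specific sizes, initializers, and regularizers are arbitrary choices):

```python
from keras.layers import Dense
from keras.regularizers import l2

# Keras 1: Dense(64, input_dim=32, init='glorot_uniform',
#                W_regularizer=l2(0.01), b_regularizer=l2(0.01), bias=True)
layer = Dense(64, input_dim=32,
              kernel_initializer='glorot_uniform',
              bias_initializer='zeros',
              kernel_regularizer=l2(0.01),
              bias_regularizer=l2(0.01),
              use_bias=True)
```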
Dropout, SpatialDropout*D, GaussianDropout
Changed interface:
- p -> rate
Embedding
Changed interface:
- init -> embeddings_initializer
- W_regularizer -> embeddings_regularizer
- W_constraint -> embeddings_constraint
Convolutional layers
- The AtrousConvolution1D and AtrousConvolution2D layers have been deprecated. Their functionality is instead supported via the dilation_rate argument in Convolution1D and Convolution2D layers.
- Convolution* layers are renamed Conv*.
- The Deconvolution2D layer is renamed Conv2DTranspose.
- The Conv2DTranspose layer no longer requires an output_shape argument, making its use much easier.
Interface changes common to all convolutional layers:
- nb_filter -> filters
- Individual kernel dimension arguments are replaced with a single tuple argument, kernel_size. E.g. a legacy call Conv2D(10, 3, 3) becomes Conv2D(10, (3, 3)).
- kernel_size can be set to an integer instead of a tuple, e.g. Conv2D(10, 3) is equivalent to Conv2D(10, (3, 3)).
- subsample -> strides. Can also be set to an integer.
- border_mode -> padding
- init -> kernel_initializer
- added bias_initializer argument
- W_regularizer -> kernel_regularizer
- b_regularizer -> bias_regularizer
- b_constraint -> bias_constraint
- bias -> use_bias
- dim_ordering -> data_format
- In the SeparableConv2D layers, init is split into depthwise_initializer and pointwise_initializer.
- Added dilation_rate argument in Conv2D and Conv1D.
- 1D convolution kernels are now saved as a 3D tensor (instead of 4D as before).
- 2D and 3D convolution kernels are now saved in the format spatial_dims + (input_depth, depth), even with data_format="channels_first".
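A before/after sketch collecting the common convolutional renames (all specific values are arbitrary):

```python
from keras.layers import Conv2D

# Keras 1: Convolution2D(10, 3, 3, subsample=(2, 2), border_mode='same',
#                        init='glorot_uniform', dim_ordering='tf', bias=True)
layer = Conv2D(filters=10,
               kernel_size=(3, 3),   # or simply 3
               strides=(2, 2),       # or simply 2
               padding='same',
               kernel_initializer='glorot_uniform',
               data_format='channels_last',
               use_bias=True)
```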
Pooling1D
- pool_length -> pool_size
- stride -> strides
- border_mode -> padding
Pooling2D, 3D
- border_mode -> padding
- dim_ordering -> data_format
ZeroPadding layers
The padding argument of the ZeroPadding2D and ZeroPadding3D layers must be a tuple of length 2 and 3 respectively. Each entry i specifies how much to pad spatial dimension i. If an entry is an integer, symmetric padding is applied to that dimension. If it's a tuple of integers, asymmetric padding is applied.
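For example, a sketch of the two forms for ZeroPadding2D (values are arbitrary):

```python
from keras.layers import ZeroPadding2D

ZeroPadding2D(padding=(1, 2))            # symmetric: pad dim 1 by 1, dim 2 by 2 on each side
ZeroPadding2D(padding=((1, 0), (2, 3)))  # asymmetric: (top, bottom), (left, right)
```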
Upsampling1D
- length -> size
BatchNormalization
The mode argument of BatchNormalization has been removed; BatchNorm now only supports mode 0 (use batch metrics for feature-wise normalization during training, and use moving metrics for feature-wise normalization during testing).
- beta_init -> beta_initializer
- gamma_init -> gamma_initializer
- added arguments center, scale (booleans, whether to use a beta and gamma respectively)
- added arguments moving_mean_initializer, moving_variance_initializer
- added arguments beta_regularizer, gamma_regularizer
- added arguments beta_constraint, gamma_constraint
- attribute running_mean is renamed moving_mean
- attribute running_std is renamed moving_variance (it is in fact a variance with the current implementation).
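A sketch using the new BatchNormalization argument names (the values shown are the defaults):

```python
from keras.layers import BatchNormalization

bn = BatchNormalization(center=True, scale=True,
                        beta_initializer='zeros',
                        gamma_initializer='ones',
                        moving_mean_initializer='zeros',
                        moving_variance_initializer='ones')
```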
ConvLSTM2D
Same changes as for convolutional layers and recurrent layers apply.
PReLU
- init -> alpha_initializer
GaussianNoise
- sigma -> stddev
Recurrent layers
- output_dim -> units
- init -> kernel_initializer
- inner_init -> recurrent_initializer
- added argument bias_initializer
- W_regularizer -> kernel_regularizer
- b_regularizer -> bias_regularizer
- added arguments kernel_constraint, recurrent_constraint, bias_constraint
- dropout_W -> dropout
- dropout_U -> recurrent_dropout
- consume_less -> implementation. String values have been replaced with integers: implementation 0 (default), 1 or 2.
- LSTM only: the argument forget_bias_init has been removed. Instead there is a boolean argument unit_forget_bias, defaulting to True.
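A before/after sketch for LSTM using the new names (values are arbitrary):

```python
from keras.layers import LSTM

# Keras 1: LSTM(32, init='glorot_uniform', inner_init='orthogonal',
#               dropout_W=0.2, dropout_U=0.2, consume_less='gpu')
layer = LSTM(units=32,
             kernel_initializer='glorot_uniform',
             recurrent_initializer='orthogonal',
             unit_forget_bias=True,   # replaces forget_bias_init
             dropout=0.2,
             recurrent_dropout=0.2,
             implementation=2)        # replaces consume_less='gpu'
```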
Lambda
The Lambda layer now supports a mask argument.
Utilities
Utilities should now be imported from keras.utils rather than from specific submodules (e.g. no more keras.utils.np_utils...).
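For example, to_categorical now comes straight from keras.utils:

```python
# Keras 1: from keras.utils.np_utils import to_categorical
from keras.utils import to_categorical

labels = to_categorical([0, 2, 1], num_classes=3)
```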
Backend
random_normal and truncated_normal
- std -> stddev
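For example:

```python
from keras import backend as K

# Keras 1: K.random_normal(shape=(2, 3), mean=0.0, std=1.0)
x = K.random_normal(shape=(2, 3), mean=0.0, stddev=1.0)
t = K.truncated_normal(shape=(2, 3), mean=0.0, stddev=1.0)
```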
Misc
- In the backend, set_image_dim_ordering and image_dim_ordering are now set_image_data_format and image_data_format.
- Any arguments (other than nb_epoch) prefixed with nb_ have been renamed to be prefixed with num_ instead. This affects two datasets and one preprocessing utility.