Release notes

Breaking Changes
* `tf.summary.trace_on` now takes a `profiler_outdir` argument. This must be set if the `profiler` arg is set to `True`.
* `tf.summary.trace_export`'s `profiler_outdir` arg is now a no-op. Enabling the profiler now requires setting `profiler_outdir` in `trace_on`.
* `tf.estimator`
* Keras 3.0 will be the default Keras version. You may need to update your script to use Keras 3.0. Please refer to the new Keras documentation for Keras 3.0 (https://keras.io/keras_3).
  To continue using Keras 2.0, do the following:
  * Install tf-keras via `pip install tf-keras~=2.16`.
  * To switch `tf.keras` to use Keras 2 (`tf-keras`), set the environment variable `TF_USE_LEGACY_KERAS=1` directly or in your Python program with `import os; os.environ["TF_USE_LEGACY_KERAS"] = "1"`. Please note that this will set it for all packages in your Python runtime.
  * Change the Keras import: replace `import tensorflow.keras as keras` or `import keras` with `import tf_keras as keras`. Update any `tf.keras` references to `keras`.
* Apple Silicon users: If you previously installed TensorFlow using `pip install tensorflow-macos`, please update your installation method. Use `pip install tensorflow` from now on.
* Mac x86 users: Mac x86 builds are being deprecated and will no longer be released as a Pip package from TF 2.17 onwards.
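The legacy-Keras switch described above can be sketched in a few lines. This is a minimal sketch: it assumes TensorFlow 2.16 and the `tf-keras` package are installed, so the TensorFlow imports themselves are shown as comments.

```python
import os

# TF_USE_LEGACY_KERAS must be set before TensorFlow is first imported,
# otherwise tf.keras will already have resolved to Keras 3.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

# With the flag set, tf.keras resolves to Keras 2 (tf-keras):
# import tensorflow as tf
# import tf_keras as keras   # replaces `import tensorflow.keras as keras`
```

Setting the variable inside the program only works if it runs before the first `import tensorflow`; exporting it in the shell avoids that ordering concern entirely.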
Known Caveats

* Wheels are now published to the `tensorflow` PyPI repository and no longer redirect to a separate package.

Major Features and Improvements

* Added support for the AMX-FP16 instruction set on X86 CPUs.
Bug Fixes and Other Changes
* `tf.lite`
  * Added support for `stablehlo.gather`.
  * Added support for `stablehlo.add`.
  * Added support for `stablehlo.multiply`.
  * Added support for `stablehlo.maximum`.
  * Added support for `stablehlo.minimum`.
  * Added support for `tfl.gather_nd`.
  * New functions in `tensorflow/lite/c/c_api_experimental.h`:
    * `TfLiteInterpreterGetVariableTensorCount`
    * `TfLiteInterpreterGetVariableTensor`
    * `TfLiteInterpreterGetBufferHandle`
    * `TfLiteInterpreterSetBufferHandle`
  * New function in `tensorflow/lite/c/c_api_opaque.h`:
    * `TfLiteOpaqueTensorSetAllocationTypeToDynamic`
  * New functions in `tensorflow/lite/c/c_api.h`:
    * `TfLiteInterpreterOptionsEnableCancellation`
    * `TfLiteInterpreterCancel`
  * New virtual methods in the `tflite::SimpleDelegateInterface` class in `tensorflow/lite/delegates/utils/simple_delegate.h`, and likewise in the `tflite::SimpleOpaqueDelegateInterface` class in `tensorflow/lite/delegates/utils/simple_opaque_delegate.h`:
    * `CopyFromBufferHandle`
    * `CopyToBufferHandle`
    * `FreeBufferHandle`
* `tf.train.CheckpointOptions` and `tf.saved_model.SaveOptions`
  * These options now take a new argument called `experimental_sharding_callback`. This is a callback function wrapper that will be executed to determine how tensors will be split into shards when the saver writes the checkpoint shards to disk. `tf.train.experimental.ShardByTaskPolicy` is the default sharding behavior, but `tf.train.experimental.MaxShardSizePolicy` can be used to shard the checkpoint with a maximum shard file size. Users with advanced use cases can also write their own custom `tf.train.experimental.ShardingCallback`s.
* `tf.train.CheckpointOptions`
  * Added `experimental_skip_slot_variables` (a boolean option) to skip restoring of optimizer slot variables in a checkpoint.
* `tf.saved_model.SaveOptions`
  * `SaveOptions` now takes a new argument called `experimental_debug_stripper`. When enabled, this strips the debug nodes from both the node defs and the function defs of the graph. Note that this currently only strips the `Assert` nodes from the graph and converts them into `NoOp`s instead.
* Keras
  * `keras.layers.experimental.DynamicEmbedding`
    * Added the `DynamicEmbedding` Keras layer. The `DynamicEmbedding` layer allows for the continuous updating of the vocabulary and embeddings during the training process. This layer maintains a hash table to track the most up-to-date vocabulary based on the inputs received by the layer and the eviction policy. When this layer is used with an `UpdateEmbeddingCallback`, which is a time-based callback, the vocabulary lookup tensor is updated at the time interval set in the `UpdateEmbeddingCallback`, based on the most up-to-date vocabulary hash table maintained by the layer. If this layer is not used in conjunction with `UpdateEmbeddingCallback`, the behavior of the layer is the same as `keras.layers.Embedding`.
  * `keras.optimizers.Adam`
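The `DynamicEmbedding` vocabulary-tracking behavior described above can be illustrated with a small plain-Python sketch. This is a toy illustration of the idea only, not the Keras API; the class and method names here are hypothetical.

```python
from collections import Counter


class ToyDynamicVocab:
    """Toy sketch of DynamicEmbedding-style vocabulary tracking: count
    incoming tokens, then periodically rebuild the lookup table, keeping
    only the most frequent tokens (a simple eviction policy)."""

    def __init__(self, capacity):
        self.capacity = capacity      # maximum vocabulary size
        self.counts = Counter()       # running token frequencies
        self.lookup = {}              # token -> embedding row index

    def observe(self, tokens):
        # Called for every batch of inputs the layer receives.
        self.counts.update(tokens)

    def update_lookup(self):
        # Called on a timer (the role UpdateEmbeddingCallback plays):
        # rebuild the lookup from the current top-`capacity` tokens.
        top = [tok for tok, _ in self.counts.most_common(self.capacity)]
        self.lookup = {tok: i for i, tok in enumerate(top)}


vocab = ToyDynamicVocab(capacity=2)
vocab.observe(["cat", "dog", "cat", "dog", "bird"])
vocab.update_lookup()
print(vocab.lookup)  # "bird" is the rarest token, so it gets no slot
```

The real layer additionally maps each kept token to a trainable embedding row; the sketch only shows how the vocabulary itself stays current as new inputs arrive.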
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aakar Dwivedi, Akhil Goel, Alexander Grund, Alexander Pivovarov, Andrew Goodbody, Andrey Portnoy, Aneta Kaczyńska, AnetaKaczynska, ArkadebMisra, Ashiq Imran, Ayan Moitra, Ben Barsdell, Ben Creech, Benedikt Lorch, Bhavani Subramanian, Bianca Van Schaik, Chao, Chase Riley Roberts, Connor Flanagan, David Hall, David Svantesson, David Svantesson-Yeung, dependabot[bot], Dr. Christoph Mittendorf, Dragan Mladjenovic, ekuznetsov139, Eli Kobrin, Eugene Kuznetsov, Faijul Amin, Frédéric Bastien, fsx950223, gaoyiyeah, Gauri1 Deshpande, Gautam, Giulio C.N, guozhong.zhuang, Harshit Monish, James Hilliard, Jane Liu, Jaroslav Sevcik, jeffhataws, Jerome Massot, Jerry Ge, jglaser, jmaksymc, Kaixi Hou, kamaljeeti, Kamil Magierski, Koan-Sin Tan, lingzhi98, looi, Mahmoud Abuzaina, Malik Shahzad Muzaffar, Meekail Zain, mraunak, Neil Girdhar, Olli Lupton, Om Thakkar, Paul Strawder, Pavel Emeliyanenko, Pearu Peterson, pemeliya, Philipp Hack, Pierluigi Urru, Pratik Joshi, radekzc, Rafik Saliev, Ragu, Rahul Batra, rahulbatra85, Raunak, redwrasse, Rodrigo Gomes, ronaghy, Sachin Muradi, Shanbin Ke, shawnwang18, Sheng Yang, Shivam Mishra, Shu Wang, Strawder, Paul, Surya, sushreebarsa, Tai Ly, talyz, Thibaut Goetghebuer-Planchon, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, weihanmines, wenchenvincent, Wenjie Zheng, Who Who Who, Yasir Ashfaq, yasiribmcon, Yoshio Soma, Yuanqiang Liu, Yuriy Chernyshov