Issue type
Bug
Have you reproduced the bug with TensorFlow Nightly?
Yes
Source
binary
TensorFlow version
2.20.0-dev20250516
Custom code
Yes
OS platform and distribution
No response
Mobile device
No response
Python version
3.12.3
Bazel version
No response
GCC/compiler version
No response
CUDA/cuDNN version
No response
GPU model and memory
No response
Current behavior?
The tf.raw_ops.ResourceSparseApplyAdagradDA operation causes a fatal crash with the error Check failed: d < dims() (1 vs. 0) when called with a scalar (rank-0) gradient tensor together with a non-empty 1-D indices tensor.
Colab Reproduction: https://colab.research.google.com/drive/1X_plMhFhjig9v4zwmIHkEiyQocY5vXfA?usp=sharing
Stack Trace Location: The crash occurs in tensorflow::SparseApplyAdagradDAOp<float, int>::Compute() at line 2467 of /tensorflow/core/kernels/training_ops.cc, specifically when `Tensor::dim_size(int)` is called on a tensor that has fewer dimensions than the requested index.
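Until the kernel validates these shapes, a caller-side guard can surface the mismatch as a Python exception instead of a process abort. The sketch below assumes the usual SparseApply shape contract (grad.shape == indices.shape + var.shape[1:]); check_sparse_apply_shapes is a hypothetical helper, not a TensorFlow API.
import tensorflow as tf
def check_sparse_apply_shapes(var, grad, indices):
    # Hypothetical helper: enforce the assumed SparseApply shape contract
    # before handing the tensors to the raw op.
    expected = indices.shape.concatenate(var.shape[1:])
    if grad.shape != expected:
        raise ValueError(
            f"grad shape {grad.shape} does not match expected {expected} "
            f"for indices shape {indices.shape} and var shape {var.shape}")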
Standalone code to reproduce the issue
import tensorflow as tf
# Create variable and accumulators
var = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)
gradient_accumulator = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)
gradient_squared_accumulator = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)
# Problematic parameters: a scalar grad paired with a 1-D indices tensor
grad = tf.constant(0.0, dtype=tf.float32)  # scalar (rank-0) gradient
indices = tf.constant([0, 0], dtype=tf.int32)  # 1-D indices with two entries
# Other parameters
lr = tf.constant(0.0, dtype=tf.float32)
l1 = tf.constant(0.0, dtype=tf.float32)
l2 = tf.constant(0.0, dtype=tf.float32)
global_step = tf.constant(1, dtype=tf.int64)
# This will crash
tf.raw_ops.ResourceSparseApplyAdagradDA(
var=var.handle,
gradient_accumulator=gradient_accumulator.handle,
gradient_squared_accumulator=gradient_squared_accumulator.handle,
grad=grad,
indices=indices,
lr=lr,
l1=l1,
l2=l2,
global_step=global_step,
use_locking=True
)
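For contrast, a shape-consistent call completes without crashing. This is only a sketch (run it in a fresh session, since the call above aborts the process); it assumes grad should have shape [num_indices] + var.shape[1:].
good_indices = tf.constant([0, 1], dtype=tf.int32)  # 1-D row indices into var
good_grad = tf.constant([[0.1, 0.1], [0.2, 0.2]], dtype=tf.float32)  # shape [2, 2]
tf.raw_ops.ResourceSparseApplyAdagradDA(
    var=var.handle,
    gradient_accumulator=gradient_accumulator.handle,
    gradient_squared_accumulator=gradient_squared_accumulator.handle,
    grad=good_grad,
    indices=good_indices,
    lr=lr,
    l1=l1,
    l2=l2,
    global_step=global_step,
    use_locking=True
)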
Relevant log output
2025-05-25 19:23:06.603818: F tensorflow/core/framework/tensor_shape.cc:359] Check failed: d < dims() (1 vs. 0)
Aborted (core dumped)