Crash in tf.raw_ops.ResourceSparseApplyAdagradDA #94130

Open
SilentTester73 opened this issue May 25, 2025 · 1 comment
Labels: comp:ops, TF 2.19, type:bug


Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

binary

TensorFlow version

2.20.0-dev20250516

Custom code

Yes

OS platform and distribution

No response

Mobile device

No response

Python version

3.12.3

Bazel version

No response

GCC/compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

The tf.raw_ops.ResourceSparseApplyAdagradDA operation causes a fatal crash with the error Check failed: d < dims() (1 vs. 0) when called with a scalar (rank-0) gradient tensor and a rank-1 indices tensor: the kernel calls `Tensor::dim_size` on the rank-0 grad before fully validating its shape, so it trips a fatal CHECK instead of returning an InvalidArgument error.

Colab Reproduction: https://colab.research.google.com/drive/1X_plMhFhjig9v4zwmIHkEiyQocY5vXfA?usp=sharing

Stack Trace Location: The crash occurs in tensorflow::SparseApplyAdagradDAOp<float, int>::Compute() at line 2467 in /tensorflow/core/kernels/training_ops.cc, specifically when `Tensor::dim_size(int)` is called with a dimension index that exceeds the tensor's rank.
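
For contrast, the op completes when grad satisfies the shape contract implied by the crash: rank >= 1, with grad.shape[0] equal to the number of indices. A minimal sketch, using the same variable shapes as the reproduction below; the gradient and learning-rate values are illustrative, not from the original report:

import tensorflow as tf

# Variables of shape [10, 2], as in the reproduction below.
var = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)
gradient_accumulator = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)
gradient_squared_accumulator = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)

# grad has rank >= 1 and grad.shape[0] matches the number of indices,
# so the kernel's dimension checks succeed.
grad = tf.constant([[0.1, 0.1], [0.2, 0.2]], dtype=tf.float32)  # shape [2, 2]
indices = tf.constant([0, 1], dtype=tf.int32)                   # shape [2]

tf.raw_ops.ResourceSparseApplyAdagradDA(
    var=var.handle,
    gradient_accumulator=gradient_accumulator.handle,
    gradient_squared_accumulator=gradient_squared_accumulator.handle,
    grad=grad,
    indices=indices,
    lr=tf.constant(0.01, dtype=tf.float32),
    l1=tf.constant(0.0, dtype=tf.float32),
    l2=tf.constant(0.0, dtype=tf.float32),
    global_step=tf.constant(1, dtype=tf.int64),
    use_locking=True,
)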

Standalone code to reproduce the issue

import tensorflow as tf

# Create variable and accumulators
var = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)
gradient_accumulator = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)
gradient_squared_accumulator = tf.Variable([[0.0, 0.0]] * 10, dtype=tf.float32)

# Problematic parameters - a scalar grad paired with a rank-1 indices tensor
grad = tf.constant(0.0, dtype=tf.float32)  # Rank-0 (scalar) gradient
indices = tf.constant([0, 0], dtype=tf.int32)  # Rank-1 indices with two elements

# Other parameters
lr = tf.constant(0.0, dtype=tf.float32)
l1 = tf.constant(0.0, dtype=tf.float32)
l2 = tf.constant(0.0, dtype=tf.float32)
global_step = tf.constant(1, dtype=tf.int64)

# This will crash
tf.raw_ops.ResourceSparseApplyAdagradDA(
    var=var.handle,
    gradient_accumulator=gradient_accumulator.handle,
    gradient_squared_accumulator=gradient_squared_accumulator.handle,
    grad=grad,
    indices=indices,
    lr=lr,
    l1=l1,
    l2=l2,
    global_step=global_step,
    use_locking=True
)

Relevant log output

2025-05-25 19:23:06.603818: F tensorflow/core/framework/tensor_shape.cc:359] Check failed: d < dims() (1 vs. 0)
Aborted (core dumped)
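
Until the kernel validates these shapes itself, a caller-side guard can surface a catchable Python error instead of aborting the process. A minimal sketch; checked_sparse_apply_adagrad_da is a hypothetical wrapper, and the contract it enforces (grad rank >= 1 with grad.shape[0] == indices.shape[0]) is assumed from the failure above:

import tensorflow as tf

def checked_sparse_apply_adagrad_da(var, gradient_accumulator,
                                    gradient_squared_accumulator, grad,
                                    indices, lr, l1, l2, global_step,
                                    use_locking=True):
    # Hypothetical wrapper: reject the shapes the kernel would abort on.
    # Assumed contract: grad has rank >= 1 and grad.shape[0] == indices.shape[0].
    if grad.shape.rank == 0 or grad.shape[0] != indices.shape[0]:
        raise ValueError(
            f"grad must have rank >= 1 with grad.shape[0] == indices.shape[0]; "
            f"got grad shape {grad.shape} and indices shape {indices.shape}")
    return tf.raw_ops.ResourceSparseApplyAdagradDA(
        var=var.handle,
        gradient_accumulator=gradient_accumulator.handle,
        gradient_squared_accumulator=gradient_squared_accumulator.handle,
        grad=grad, indices=indices, lr=lr, l1=l1, l2=l2,
        global_step=global_step, use_locking=use_locking)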
@Venkat6871 (Contributor)
I was able to reproduce the issue with TensorFlow 2.19.0 as well as the nightly build. Please find the gist for reference.
Thank you!
