
error in Model.fit_generator #11

Open

saurabhkodag opened this issue Feb 15, 2021 · 6 comments
@saurabhkodag

I'm getting a Model.fit_generator error on Google Colab, but on my local PC it works properly.

[two screenshots of the error attached]

@saurabhkodag
Author

saurabhkodag commented Feb 15, 2021

Can anyone help with it?

Thanks, please help me.

@saurabhkodag
Author

saurabhkodag commented Feb 16, 2021

import numpy as np
import pandas as pd
from keras.preprocessing import sequence  # or tensorflow.keras.preprocessing

# encoding_train, word2idx, vocab_size and max_len are defined in earlier notebook cells.
def data_generator(batch_size=32):
  partial_caps = []
  next_words = []
  images = []
  # Load and shuffle the training captions
  df = pd.read_csv('flickr8k_training_dataset.txt', delimiter='\t')
  df = df.sample(frac=1)
  rows = df.iterrows()
  c = []
  imgs = []
  for i in range(df.shape[0]):
    x = next(rows)
    c.append(x[1][1])
    imgs.append(x[1][0])

  count = 0
  while True:
    for j, text in enumerate(c):
      current_image = encoding_train[imgs[j]]
      for i in range(len(text.split()) - 1):
        count += 1
        # Partial caption up to word i, with the following word as a one-hot target
        partial = [word2idx[txt] for txt in text.split()[:i + 1]]
        partial_caps.append(partial)
        n = np.zeros(vocab_size)
        n[word2idx[text.split()[i + 1]]] = 1
        next_words.append(n)
        images.append(current_image)

      if count >= batch_size:
        next_words = np.asarray(next_words)
        images = np.asarray(images)
        partial_caps = sequence.pad_sequences(partial_caps, maxlen=max_len, padding='post')
        yield ([images, partial_caps], next_words)  # change is here
        partial_caps = []
        next_words = []
        images = []
        count = 0
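
For reference, a minimal sketch of how this generator is typically consumed on the TensorFlow 2.x builds that Colab ships, where Model.fit_generator is deprecated and model.fit accepts a generator directly; `model` and `samples_per_epoch` below are assumed placeholder names, not taken from the notebook:

# Usage sketch only: `model` is a compiled Keras model and `samples_per_epoch`
# a hypothetical count of training samples for this dataset.
generator = data_generator(batch_size=32)
model.fit(generator,
          steps_per_epoch=samples_per_epoch // 32,
          epochs=10,
          verbose=1)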

Same error.

I changed the function below (in load_data.py, inside the utils folder) to the function you provided, but I'm still getting the same error.

def data_generator(images, captions, tokenizer, max_length, batch_size, random_seed):
  # Setting random seed for reproducibility of results
  random.seed(random_seed)
  # Image ids
  image_ids = list(captions.keys())
  _count=0
  assert batch_size<= len(image_ids), 'Batch size must be less than or equal to {}'.format(len(image_ids))
  while True:
    if _count >= len(image_ids):
      # Generator exceeded or reached the end so restart it
      _count = 0
    # Batch list to store data
    input_img_batch, input_sequence_batch, output_word_batch = list(), list(), list()
    for i in range(_count, min(len(image_ids), _count+batch_size)):
      # Retrieve the image id
      image_id = image_ids[i]
      # Retrieve the image features
      image = images[image_id][0]
      # Retrieve the captions list
      captions_list = captions[image_id]
      # Shuffle captions list
      random.shuffle(captions_list)
      input_img, input_sequence, output_word = create_sequences(tokenizer, max_length, captions_list, image)
      # Add to batch
      for j in range(len(input_img)):
        input_img_batch.append(input_img[j])
        input_sequence_batch.append(input_sequence[j])
        output_word_batch.append(output_word[j])
    _count = _count + batch_size
    yield [[np.array(input_img_batch), np.array(input_sequence_batch)], np.array(output_word_batch)]
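
As a quick check that the generator itself is not the problem, one can pull a single batch from it outside of fit_generator and inspect the shapes; a minimal sketch, assuming the same images, captions and tokenizer objects used above and placeholder values for the remaining arguments:

# Smoke test: draw one batch and print the array shapes the model will receive.
# max_length, batch_size and random_seed below are placeholder values.
gen = data_generator(images, captions, tokenizer,
                     max_length=34, batch_size=4, random_seed=42)
(batch_images, batch_sequences), batch_words = next(gen)
print(batch_images.shape, batch_sequences.shape, batch_words.shape)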

@Manuj229

Manuj229 commented Jul 7, 2021

any solutions?

@saurabhkodag
Author

saurabhkodag commented Jul 7, 2021 via email

@Manuj229

Manuj229 commented Jul 7, 2021

I am still having the same issue. Please provide a solution.

@saurabhkodag
Author

saurabhkodag commented Jul 7, 2021 via email
