
Dirty metadata buffer #46

Closed
andsens opened this issue Jan 17, 2013 · 12 comments

@andsens
Owner

andsens commented Jan 17, 2013

Something's amiss on bootup. dmesg says:

JBD: Spotted dirty metadata buffer (dev = xvda1, blocknr = 1). There's a risk of filesystem corruption in case of system crash.
@ghost assigned andsens Jan 21, 2013
@yanfalies

I had this problem. I created a plugin which added mount options for ext file systems. barrier=0 fixes it.
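
For reference, the fstab entry such a plugin produces would look something like this (the device and filesystem type are just the ones this thread uses):

/dev/xvda1  /  ext4  defaults,barrier=0  0  1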

@andsens
Owner Author

andsens commented Mar 29, 2013

Thing is, I already do that.

@mioi

mioi commented Apr 4, 2013

Hi there, I just started encountering this same problem yesterday. Launching a new EC2 instance from a freshly-built AMI works fine (apart from the "JBD: Spotted dirty metadata buffer" warning in dmesg), but upon reboot it fails to come up. Before rebooting, though, executing this seems to fix it (ignoring the warnings):

fsck -v /dev/xvda1

But still, have you figured out what is causing this? Thanks.

@andsens
Owner Author

andsens commented Apr 4, 2013

That's good info! Hm, I wonder what needs repairing though. Does fsck output any details on this?
Maybe tune2fs is messing with something in 12-format-volume.

@andsens
Owner Author

andsens commented Apr 4, 2013

[...] but upon reboot, it fails to come up. Before rebooting it though, executing this seems to fix it

On second thought, I am not exactly sure what you mean. Does it only appear on first boot or every time?

@yanfalies

I've seen it happen too, but after a reboot. Anecdotally I've heard this just happens on Amazon. It probably wouldn't hurt to run e2fsck -f on the newly formatted partition after unmounting.
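
Something along these lines, once the image build is done writing to the volume (the mount point and device node here are hypothetical):

umount /mnt/target    # hypothetical mount point used during the build
e2fsck -f /dev/xvdf   # hypothetical device node; -f forces a check even if the fs looks clean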

@mioi

mioi commented Apr 8, 2013

So I commented out the case block at the bottom of tasks/50-ec2-scripts where it wants to install the change-root-uuid init.d script, and I don't see this dirty metadata buffer warning anymore.

Is there some reason you want to change the UUID of the root partition to a random UUID?

@andsens
Owner Author

andsens commented Apr 8, 2013

Yes. Hm, so this warning might actually be caused by tune2fs itself? I guess changing the UUID of the root volume on boot is not a healthy thing to do, but I'm not sure how else to handle that case.
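
For context, the UUID randomization presumably boils down to a tune2fs call along these lines (a guess at what change-root-uuid runs, not the actual script):

tune2fs -U random /dev/xvda1    # assign a fresh random UUID to the root filesystem

Running that against the mounted root is exactly the kind of thing JBD could be complaining about.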

@mioi

mioi commented Apr 9, 2013

Maybe instead of changing the UUID of the root volume at first bootup, you can use the 'nouuid' flag with your mount command when attaching EBS volumes with duplicate UUIDs during troubleshooting. I am not sure if/how this works, as I have never done it.
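
The invocation I have in mind would look roughly like this (the device and mount point are made up; as far as I know nouuid is documented for XFS, so whether ext volumes honor it at all is exactly what I am unsure about):

mount -o nouuid /dev/xvdf /mnt/recovery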

@andsens
Owner Author

andsens commented Apr 9, 2013

Yes, of course I can do that. But I am thinking of all the poor folks out there who will have to google their way around a wonky error message. That's why I made this script in the first place; you really have to know Unix before thinking of doing this.

@yanfalies

How about you come at this problem sideways? If the JBD code is really the problem, disable the journal on ext4 after you unmount the final volume and before snapshotting, using:

tune2fs -O ^has_journal /dev/xvda1

On first boot,

  1. run the UUID randomizer task
  2. re-enable the journal option
  3. force fsck on next boot
  4. then reboot (see the sketch below).
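
Put together, the sequence would look roughly like this. The device path is the one used elsewhere in this thread, and touch /forcefsck is the usual sysvinit way to schedule a check on the next boot; whether tune2fs permits the journal toggle against the mounted root is part of what would need testing:

# before snapshotting, with the final volume unmounted:
tune2fs -O ^has_journal /dev/xvda1

# on first boot:
tune2fs -U random /dev/xvda1        # the UUID randomizer task
tune2fs -O has_journal /dev/xvda1   # re-enable the journal
touch /forcefsck                    # schedule a forced fsck for the next boot
reboot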

@andsens
Owner Author

andsens commented Apr 10, 2013

Disable the journal on ext4 after you unmount the final volume

This is certainly a solution, but it feels very hacky and could mess with the stability of the image. If there is no other way, I would rather disable the script and add a hint in the yet-to-be-written documentation.
