Document new ideas for RAID management on the storage configuration #1671
Conversation
Let me draft here a possible algorithm that could handle the auto-calculation of sizes explained in the document. It would imply adding some fields to
Generally, the different members of an MD array should have the same size. In fact, that does not play very well with size ranges. For example, assume two members of the same RAID1, both with a min of 40 GiB and a max of 100 GiB. Imagine a chosen PartitionsDistribution in which one gets located in a space where it can grow up to 100 GiB and the other in a space where it can only grow up to 45 GiB. It makes no sense for the first one to grow that much (and maybe that space could be used by another partition that can actually take advantage of it).

To mitigate that, we can process the spaces of the distribution in a sensible order, starting with those that contain RAID members but offer a smaller extra size. After processing each space, we can register the smallest member of each MD. When processing the next space, we can then limit the growth of members of the same MD. This looks like a good place to insert that logic:

```ruby
class Y2Storage::Proposal::PartitionCreator
  def create_partitions(distribution)
    self.devicegraph = original_graph.duplicate

    devices_map = distribution.spaces.reduce({}) do |devices, space|
      new_devices = process_free_space(
        space.disk_space, space.partitions, space.usable_size, space.num_logical
      )
      devices.merge(new_devices)
    end

    CreatorResult.new(devicegraph, devices_map)
  end
end
```

If needed, we could also consider adding a criterion of "best distribution" that minimizes the chance of locating members of the same array in spaces that would result in situations like the one described above.
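As a rough illustration of that idea, here is a minimal sketch of the ordering and capping step. It is only a sketch under assumed names: the helpers `extra_size`, `md_raid`, `limit_max_size` and `assigned_size` are hypothetical, not existing Y2Storage API.

```ruby
# Illustrative sketch only: the helper methods used on spaces and planned
# partitions (extra_size, md_raid, limit_max_size, assigned_size) are
# hypothetical names, not part of the current Y2Storage API.
def process_spaces_limiting_md_growth(distribution)
  smallest_member = {} # MD array => size of its smallest member so far

  # Start with the spaces that offer less room to grow
  distribution.spaces.sort_by(&:extra_size).each do |space|
    space.partitions.each do |planned|
      md = planned.md_raid
      next unless md

      # Do not let this member grow beyond the smallest member already placed
      planned.limit_max_size(smallest_member[md]) if smallest_member[md]

      size = planned.assigned_size(space)
      smallest_member[md] = [smallest_member[md], size].compact.min
    end
  end
end
```

Whether such a step would live inside `create_partitions` or in a previous pass over the planned partitions is left open here.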
"generate": { | ||
"mdRaids": "default", | ||
"level": "raid0", | ||
"devices": [ | ||
{ "generate": ["sda", "sdb"] } | ||
] |
Amazing :)
```json
{
  "devices": [ { "generate": ["sda", "sdb"] } ],
  "level": "raid0",
  "size": "40 GiB"
}
```
What happens if both generate and the raid indicate a size?
That doesn't make much sense, I would say.
Ok, but the schema would admit it. So I guess we will need to check those cases in ConfigChecker.
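To illustrate the kind of check that could cover those cases, here is a hedged sketch; the method, attribute and issue-reporting names are hypothetical and do not reflect the actual ConfigChecker code.

```ruby
# Hypothetical sketch: flag mdRaids that specify a size both directly and
# through their "generate" entry. The names used here (md_raids, generate,
# issues, issue) are illustrative only, not the real ConfigChecker interface.
def check_md_raid_sizes(config)
  config.md_raids.each do |raid|
    next unless raid.size && raid.generate&.size

    issues << issue("mdRaid: a size is given both directly and via 'generate'")
  end
end
```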
LGTM
We are considering the best way to integrate RAID configuration in Agama. That should cover both the configuration (used, for example, for unattended installation) and the web UI.
This pull request documents some of the ideas we have been discussing.
Bonus: moved some of the content of auto_storage.md to a separate file to make the document more focused.