
[CURA-9331] Check if all paths are empty in supposedly non-empty wall-path. #1688

Merged (2 commits, Jul 5, 2022)

Conversation

@rburema (Member) commented Jul 1, 2022

This caused randomized infill starts to misbehave: a 'non-empty' path-vector could be chosen as the start even though all of the paths within it were empty.

CURA-9331
@github-actions bot (Contributor) commented Jul 1, 2022

Unit Test Results

0 tests   0 ✔️  0s ⏱️
0 suites  0 💤
0 files    0

Results for commit 819d09e.

♻️ This comment has been updated with latest results.

@Ghostkeeper (Collaborator)

It's almost fixed! All models attached to this ticket work now, except for this one:
CCR10S_Alastor__-_Head(2).zip

This is the stack trace:

#0  0x000055555565bef6 in std::vector<cura::ExtrusionJunction, std::allocator<cura::ExtrusionJunction> >::operator[] (this=0x10, __n=0) at /usr/include/c++/9/bits/stl_vector.h:1043
#1  0x000055555564d29d in cura::FffGcodeWriter::processSingleLayerInfill (this=0x5555559d71c8 <cura::FffProcessor::instance+8>, storage=..., gcode_layer=..., mesh=..., extruder_nr=0, mesh_config=..., 
    part=...) at /home/ghostkeeper/Projects/CuraEngine/src/FffGcodeWriter.cpp:1779
#2  0x000055555564a8b1 in cura::FffGcodeWriter::processInfill (this=0x5555559d71c8 <cura::FffProcessor::instance+8>, storage=..., gcode_layer=..., mesh=..., extruder_nr=0, mesh_config=..., part=...)
    at /home/ghostkeeper/Projects/CuraEngine/src/FffGcodeWriter.cpp:1459
#3  0x000055555564a208 in cura::FffGcodeWriter::addMeshPartToGCode (this=0x5555559d71c8 <cura::FffProcessor::instance+8>, storage=..., mesh=..., extruder_nr=0, mesh_config=..., part=..., 
    gcode_layer=...) at /home/ghostkeeper/Projects/CuraEngine/src/FffGcodeWriter.cpp:1433
#4  0x0000555555649a48 in cura::FffGcodeWriter::addMeshLayerToGCode (this=0x5555559d71c8 <cura::FffProcessor::instance+8>, storage=..., mesh=..., extruder_nr=0, mesh_config=..., gcode_layer=...)
    at /home/ghostkeeper/Projects/CuraEngine/src/FffGcodeWriter.cpp:1403
#5  0x0000555555646371 in cura::FffGcodeWriter::processLayer (this=0x5555559d71c8 <cura::FffProcessor::instance+8>, storage=..., layer_nr=..., total_layers=158)
    at /home/ghostkeeper/Projects/CuraEngine/src/FffGcodeWriter.cpp:1032
#6  0x000055555563c390 in cura::FffGcodeWriter::<lambda(int)>::operator()(int) const (__closure=0x7fffffffd298, layer_nr=8) at /home/ghostkeeper/Projects/CuraEngine/src/FffGcodeWriter.cpp:150
#7  0x0000555555659829 in cura::MultipleProducersOrderedConsumer<cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>, cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)> >::produce(cura::MultipleProducersOrderedConsumer<cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>, cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)> >::lock_t &) (this=0x7fffffffd260, lock=...)
    at /home/ghostkeeper/Projects/CuraEngine/src/utils/ThreadPool.h:308
#8  0x00005555556594f6 in cura::MultipleProducersOrderedConsumer<cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>, cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)> >::worker(cura::MultipleProducersOrderedConsumer<cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>, cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)> >::lock_t &) (this=0x7fffffffd260, lock=...)
    at /home/ghostkeeper/Projects/CuraEngine/src/utils/ThreadPool.h:348
#9  0x0000555555659312 in cura::MultipleProducersOrderedConsumer<cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>, cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)> >::<lambda(cura::MultipleProducersOrderedConsumer<cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>, cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)> >::lock_t&)>::operator()(cura::MultipleProducersOrderedConsumer<cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>, cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)> >::lock_t &) const (
    this=0x7fffffffd260, th_lock=...) at /home/ghostkeeper/Projects/CuraEngine/src/utils/ThreadPool.h:267
#10 0x0000555555659e44 in std::_Function_handler<void(std::unique_lock<std::mutex>&), cura::MultipleProducersOrderedConsumer<Producer, Consumer>::run(cura::ThreadPool&) [with Producer = cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>; Consumer = cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)>]::<lambda(cura::MultipleProducersOrderedConsumer<cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(int)>, cura::FffGcodeWriter::writeGCode(cura::SliceDataStorage&, cura::TimeKeeper&)::<lambda(cura::LayerPlan*)> >::lock_t&)> >::_M_invoke(const std::_Any_data &, std::unique_lock<std::mutex> &) (__functor=..., __args#0=...) at /usr/include/c++/9/bits/std_function.h:300
#11 0x0000555555600ed1 in std::function<void (std::unique_lock<std::mutex>&)>::operator()(std::unique_lock<std::mutex>&) const (this=0x7fffefffecf0, __args#0=...)
    at /usr/include/c++/9/bits/std_function.h:688
#12 0x0000555555600313 in cura::ThreadPool::work_while<cura::ThreadPool::worker()::<lambda()> >(cura::ThreadPool::lock_t &, cura::ThreadPool::<lambda()>) (this=0x5555559fcf00, lock=..., predicate=...)
    at /home/ghostkeeper/Projects/CuraEngine/src/utils/ThreadPool.h:70
#13 0x0000555555600044 in cura::ThreadPool::worker (this=0x5555559fcf00) at /home/ghostkeeper/Projects/CuraEngine/src/utils/ThreadPool.cpp:21
#14 0x0000555555602786 in std::__invoke_impl<void, void (cura::ThreadPool::*)(), cura::ThreadPool*> (
    __f=@0x5555559f8570: (void (cura::ThreadPool::*)(cura::ThreadPool * const)) 0x5555555fffec <cura::ThreadPool::worker()>, __t=@0x5555559f8568: 0x5555559fcf00) at /usr/include/c++/9/bits/invoke.h:73
#15 0x00005555556026b4 in std::__invoke<void (cura::ThreadPool::*)(), cura::ThreadPool*> (
    __fn=@0x5555559f8570: (void (cura::ThreadPool::*)(cura::ThreadPool * const)) 0x5555555fffec <cura::ThreadPool::worker()>) at /usr/include/c++/9/bits/invoke.h:95
#16 0x0000555555602613 in std::thread::_Invoker<std::tuple<void (cura::ThreadPool::*)(), cura::ThreadPool*> >::_M_invoke<0ul, 1ul> (this=0x5555559f8568) at /usr/include/c++/9/thread:244
#17 0x00005555556025ca in std::thread::_Invoker<std::tuple<void (cura::ThreadPool::*)(), cura::ThreadPool*> >::operator() (this=0x5555559f8568) at /usr/include/c++/9/thread:251
#18 0x00005555556025aa in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (cura::ThreadPool::*)(), cura::ThreadPool*> > >::_M_run (this=0x5555559f8560) at /usr/include/c++/9/thread:195
#19 0x00007ffff7694de4 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#20 0x00007ffff77a8609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#21 0x00007ffff737f133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

From the code in question:

            else //So walls_generated must be true.
            {
                std::vector<VariableWidthLines>* start_paths = &wall_tool_paths[rand() % wall_tool_paths.size()];
                while(start_paths->empty()) //We know for sure (because walls_generated) that one of them is not empty. So randomise until we hit it. Should almost always be very quick.
                {
                    start_paths = &wall_tool_paths[rand() % wall_tool_paths.size()];
                }
                near_start_location = (*start_paths)[0][0].junctions[0].p;
            }

The issue seems to be with this:

(gdb) print (*start_paths)[0]
$1 = std::vector of length 0, capacity 0

I think I have a solution for this, so I'll commit that shortly.

Maybe we should find the first non-empty path instead? That would make the code quite a bit more complex, though.
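A sketch of what "find the first non-empty path" could look like, combined with the stricter emptiness test the `while` loop above needs. The types and the helper `pickStartElement` are simplified illustrations, not the code that was committed; in CuraEngine the inner elements are `ExtrusionLine`s holding junctions:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Simplified stand-ins for the CuraEngine types.
using Path = std::vector<int>;
using VariableWidthLines = std::vector<Path>;

// Re-roll until the chosen path-vector has at least one path with content,
// then take the first element of the first such path, instead of blindly
// indexing [0][0] as the crashing code did.
const int* pickStartElement(const std::vector<VariableWidthLines>& wall_tool_paths)
{
    auto truly_non_empty = [](const VariableWidthLines& paths)
    {
        return std::any_of(paths.begin(), paths.end(),
                           [](const Path& p) { return !p.empty(); });
    };
    // Guard: if every path-vector is effectively empty, bail out
    // instead of looping forever.
    if (std::none_of(wall_tool_paths.begin(), wall_tool_paths.end(), truly_non_empty))
    {
        return nullptr;
    }
    const VariableWidthLines* start_paths = &wall_tool_paths[rand() % wall_tool_paths.size()];
    while (! truly_non_empty(*start_paths))
    {
        start_paths = &wall_tool_paths[rand() % wall_tool_paths.size()];
    }
    // Guaranteed to find one, because truly_non_empty(*start_paths) holds.
    auto first = std::find_if(start_paths->begin(), start_paths->end(),
                              [](const Path& p) { return !p.empty(); });
    return &(*first)[0];
}
```

The extra `find_if` is what makes this "a lot more complex" than the original one-liner, but it removes the possibility of dereferencing element `[0]` of an empty inner vector, which is undefined behavior and the cause of the crash in the stack trace above.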

Contributes to issue CURA-9331.
@rburema rburema merged commit 5bd6636 into 5.1 Jul 5, 2022
@rburema rburema deleted the CURA-9331_fix_randomize_start_on_empty branch July 5, 2022 14:42