
Would like unique set of mute groups per screen (1024 instead of 32) #4

Open
TDeagan opened this issue Dec 21, 2015 · 9 comments

@TDeagan
Contributor

TDeagan commented Dec 21, 2015

The current mute group model is difficult to use and understand. It also is severely limiting as a performance tool compared to the rest of the application. Expanding the number of mute groups in a usable manner is one way to increase the utility of this highly desirable feature.

I would like to use the same set of 32 mute group keys that is used today to learn and activate a different set of mute groups on each screen. The current mute group model (the subject of pull request #3) allows only 32 mute groups, and they are the same regardless of the screen you're on.

It would be a nightmare to try to remember 1024 unique mute group keys. But if each screen could have a different set of groups using the same activation keys, the user would not be forced to organize the tracks identically on every screen. Since different screens might represent different 'songs' in a live performance, the sequences could differ in number and arrangement from song to song. The current mute group model (which I think is broken, and which the pull request above would fix) is too limited to allow that song-to-song flexibility.

I am working through the code in the perform class to get a deep understanding of where and how to convert the mute group identifier to one that is based on the same screen_offset * seqs_in_set model used in many other places. Once I get a working candidate, I'll create a pull request.
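
A rough sketch of the indexing I have in mind; the constant and function names below are placeholders for illustration, not the actual perform members:

```cpp
// Sketch only: c_seqs_in_set, c_max_sets and mute_group_index() are
// illustrative names, not the real identifiers in perform.cpp.

const int c_seqs_in_set = 32;   // sequences (and mute-group keys) per screen
const int c_max_sets    = 32;   // number of screen sets

// Map a (screen set, mute-group key) pair to a unique group slot,
// mirroring the screen_offset * seqs_in_set addressing used elsewhere:
// 32 sets * 32 keys = 1024 distinct mute groups.
inline int mute_group_index (int screenset, int mute_key)
{
    return screenset * c_seqs_in_set + mute_key;    // 0 .. 1023
}
```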

I expect that this will modify the mute group learn and activation functions but not the functions involved with storing and modifying the key definitions.

@TDeagan
Contributor Author

TDeagan commented Dec 21, 2015

In case my concept wasn't clear, I'm not considering mute groups that cross screen boundaries. In other words, a mute group (in the model I'm proposing) would consist only of tracks on a given screen. All tracks on all other screens would be turned off.

32 mute groups whose members could be on any screen is another potential model, but I believe it would be a mess to try to stay aware of tracks across 32 different screens. As a live performance tool, having each screen be self-contained is a useful paradigm (IMHO).

@ahlstromcj
Owner

All are good thoughts. One thing I've done is to make each new feature variant an option: build, macro, command-line, rc file, or usr file, as appropriate. Also, work in small increments and test often, because the app is a bit complex.

Anyway, I'm kind of 'on vacation' for a while, and will attend to your pull request eventually. Thanks!

@TDeagan
Contributor Author

TDeagan commented Dec 22, 2015

Thanks, no hurry on my part to get things merged. I'm just trying to knock out as much as I can while I have some free time on vacation. It's fine if the changes take time and consideration before they get pulled into master. Without an automated test harness for regression, slow is a good way to proceed.

Luckily, the things I'm interested in seem to be constrained to a couple of functions in perform.cpp. I'll start spending some cycles to determine how to do the feature variant stuff.

A closer look indicates that the requisite arrays will need 32,768 entries to provide unique mute groups per key per screen: 32 mute keys * 32 screens * 32 sequence states.
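
For illustration, the storage math works out roughly like this; the array and constant names are placeholders, not the actual members in perform.cpp:

```cpp
// Sketch only: illustrative names, not the real arrays in perform.cpp.
const int c_mute_keys   = 32;   // mute-group hot-keys
const int c_max_sets    = 32;   // screen sets
const int c_seqs_in_set = 32;   // sequence on/off states per group

// One on/off flag per sequence, per key, per screen set:
// 32 * 32 * 32 = 32,768 entries.
bool g_mute_group_states [c_max_sets * c_mute_keys * c_seqs_in_set];

// Flag for sequence 'seq' in the group triggered by 'key' on 'screenset'.
inline bool & group_state (int screenset, int key, int seq)
{
    return g_mute_group_states
    [
        (screenset * c_mute_keys + key) * c_seqs_in_set + seq
    ];
}
```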

The arrays don't look to be bound for the most part, but I'm doing a survey of all uses to figure out where the appropriate e

@arnaud-jacquemin
Contributor

Hello Tim,

I'm interested in your proposal. I'm thinking about using each screen set for a different song, and each mute group for a part of a song (verse, bridge, chorus, break, for instance). So I thought I could create 32 different mute groups for each screen set.

But it doesn't seem to work that way in the version I'm using (2016-07-14 0.9.16-0-g0193728 * master). If I "learn" a mute group on screen set 1, it is the same on every screen set. If I change it on screen set 2, it is modified on the other screen sets as well. It's quite confusing...

Did you manage to implement this change?

Thanks in advance

(English is not my mother tongue, please excuse any errors on my part... Nonetheless I hope I'm clear enough!)

@ahlstromcj
Owner

I actually tried to get a feature working where, when you switch from one screen-set to another, the first screen-set would be muted (all patterns in that screen-set would be muted), while the second screen-set would then be unmuted (so all patterns on that screenset would play). Actually, what I intended was unqueuing on the first screens, and queuing on the second. I called it "auto-queuing".
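
In rough pseudocode the intent was something like the following; queue_pattern() and unqueue_pattern() are placeholders, not the actual perform interface:

```cpp
// Sketch only: placeholder helpers, not the real perform methods.
const int c_seqs_in_set = 32;

void queue_pattern (int seq);       // placeholder: cue a pattern to start playing
void unqueue_pattern (int seq);     // placeholder: cue a pattern to stop playing

// On switching screen sets, unqueue every pattern in the old set and
// queue every pattern in the new one ("auto-queuing").
void auto_queue_switch (int old_set, int new_set)
{
    for (int s = 0; s < c_seqs_in_set; ++s)
    {
        unqueue_pattern (old_set * c_seqs_in_set + s);
        queue_pattern (new_set * c_seqs_in_set + s);
    }
}
```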

I tried like a son-of-a-bitch to get it working, but gave up after a few days of frustration. At some point I will take up the issue again and see if time has made me any smarter. In the meantime, getting the mutes for each screen-set to be preserved ought to be more do-able.

Thanks for the interest!

@TDeagan
Contributor Author

TDeagan commented Aug 14, 2016

Very sadly, I abandoned this effort. I was using sequencer64 with a touchscreen Raspberry Pi, but after the third time the RPi decided to corrupt the SD card, for which there is no recovery unless you're practicing an elaborate imaging strategy, I gave up and went back to Ableton on a laptop.

I lost a lot of source code during the last crash. I should have been backing up better, and could probably recover from my last set of backups, but I'd have to rebuild the environment for the 4th time. My images don't seem to work.

The inability to use an RPi in a production setting frustrates the heck out of me. Sequencer64 was exactly the right set of features for what I wanted to do. But if I'm going to drag a laptop to gigs, I might as well use the painfully expensive tool I bought.

Very sorry,
--tim


@arnaud-jacquemin
Contributor

Hello Chris and Tim, thank you for your answers!

Chris, your "auto-queuing" feature is interesting; it could be useful in some cases. But it should be optional, because some users would also want to "mix and match" patterns from different screen sets, a bit like a techno DJ mix.

Tim, I'm sorry to read that you had such a bad experience with your Raspberry Pi, and that you lost all your work...

Too bad my C++ skills are almost nonexistent; otherwise I would really like to contribute to the "32 mute groups for each screen set" feature. I think it would be as good as the "scene" concept in Ableton Live.

@georgkrause

@arnaud-jacquemin I think there is a misunderstanding here. @TDeagan wanted to implement a feature where the mute groups toggle the same patterns on each screen: you define a mute group, and when you switch screens you can use the same mute group for the patterns in the same places, but on the second screen. ;)

@georgkrause

After reading it over again, my comment was wrong. Sorry! :x
