Reverb EFX sound API #1367
I wouldn't recommend using XACT at this point (#1479). I don't see how to provide a cross-platform solution without doing our own DSP. I suppose end users would like something simpler, like setting an echo delay. And the industry is moving toward rendering audio in real time from spatial geometry instead of fine-tuning filters. Perhaps an alternative would be to add a convolution parameter to the SoundEffectProcessor.

[1] https://github.com/kniEngine/kni/blob/e2c630cfd0f7499f79fc67be2a4e52a668714767/MonoGame.Framework/Audio/OpenAL/ConcreteAudioService.cs#L307
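To make the "convolution parameter" idea concrete, here is a minimal sketch of what such a parameter would do under the hood: naive direct convolution of a dry signal with an impulse response. A real engine would use partitioned FFT convolution for speed, but the math is the same. The names here are illustrative, not an existing KNI API.

```csharp
// Naive O(N*M) convolution: wet[n+k] += dry[n] * ir[k].
// Fine for short impulse responses; real-time reverb would use
// partitioned FFT convolution instead.
static float[] Convolve(float[] dry, float[] impulseResponse)
{
    var wet = new float[dry.Length + impulseResponse.Length - 1];
    for (int n = 0; n < dry.Length; n++)
        for (int k = 0; k < impulseResponse.Length; k++)
            wet[n + k] += dry[n] * impulseResponse[k];
    return wet;
}
```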
Re: "I wouldn't recommend using XACT at this point. #1479": the parameters look good; it's just a matter of documenting the units (feet, meters, ms, etc.).

[1] looks fine. They are adjusting the values to try to match the effects; I think it changes the OpenAL settings to match the DX ones, which is great. If something isn't implemented, put a //TODO; I doubt anyone will use those. ("// Don't know what to do with these, EFX has no mapping for them. Just ignore for now.")

They're definitely good enough; if they work at all on a platform, they're fine. The other important thing is that there's some code that makes the OpenAL version sound more like the other one. So I would expose only the parameter set that Silk exposes for Windows and treat that as the canonical one; when the implementation is OpenAL, use the XNA values to adjust it. I think that's roughly how it was done in the code. That way you have one set of parameters for all platforms.

The key may be setting up the bare-minimum structures that the XACT file format gets loaded into, and making sure they are actually utilized and applied when XACT is on, and that when you create more than one SoundEffectInstance from a SoundEffect it builds the chain of voices. If that works, I would carefully document the limits and meaning of every parameter: milliseconds, meters squared for the room size, and so on. I would just start by checking whether room size works; if enough of the parameters work, I think that's the way to go.

To finish it, there are three pieces to look at. One is FNA. Another might be NWave, but that really is starting over, and NWave may not support everything, because the MonoGame one is old and goes all the way down into the drivers when it can; some of those drivers have DSPs. The Qualcomm ones might go right to the DSP; the whole Hexagon chip is DSP, it's meant to be very low power, and all their AI runs on the DSP. Everyone moved to CPUs because they thought they could afford 64-bit floating point, but when I worked in that field everybody used fixed point. I don't know why people say it's more difficult; it definitely uses less power.

[3] You've added more platforms, including some web platforms, but I don't think the ConvolverNode in the Web Audio API can replace the driver code. That said, a ConvolverNode is basically how you would implement those filters; on the web it all gets done under the hood.

I'm too tired; call later if you want. If you don't want to do it at all, maybe when I rest I can just do it myself, but I don't need it right now. I still have the quantum gravity project, which is more important. I just thought that since you're working on it, I'd already spent time on it, and I have three years of experience working in an EQ lab in LA with the THX guy and the USC lab, I might know a little about it. We used DSPs with fixed point, and that was the industry standard.
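Since the main point above is "one canonical parameter set, with documented units, adjusted per platform", here is a rough sketch of what that could look like. The struct, its members, and the exact limits used for clamping are assumptions for illustration (the EFX reverb ranges quoted are from memory of the extension spec), not existing KNI/MonoGame types.

```csharp
using System;

/// <summary>
/// Hypothetical canonical reverb parameters. Every member documents
/// its unit, which is the point of the exercise.
/// </summary>
public struct ReverbParams
{
    public float DecayTimeMs;        // late-reverb decay time, in milliseconds
    public float ReflectionsDelayMs; // first-reflection delay, in milliseconds
    public float RoomSizeMeters;     // characteristic room dimension, in meters
    public float WetDryMix;          // 0 = fully dry .. 1 = fully wet
}

public static class EfxMapping
{
    // OpenAL EFX expresses times in seconds, so the canonical ms values
    // are converted and clamped to what I recall as the EFX reverb
    // limits (decay 0.1..20 s, reflections delay 0..0.3 s).
    public static (float decaySec, float reflectionsSec) ToEfx(in ReverbParams p) =>
        (Math.Clamp(p.DecayTimeMs / 1000f, 0.1f, 20f),
         Math.Clamp(p.ReflectionsDelayMs / 1000f, 0f, 0.3f));
}
```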
At least trying it on Windows would be worth it. Setting it all up would take me about 4 hours; for you it might take ten minutes. If you look under reverb in Silk, you're going to find the presets. If you play a sound and apply one, it should change the sound; that's what I would expect. Then trace it and make sure the effect slots are actually being used. That would be something to know.

I would just serialize one of those preset structures as part of my sound effects. In my setup, every object can have an emitter with a little grip: I drag a file name, stick it on an object, and that's the sound's position. If that object spins, it makes a lot of different sounds as it spins because of the Doppler effect, and if it hits things it has a normal impulse response and a frictional one. All my sounds are driven by the physics.

I've been doing my own sound tools because I don't like using separate tools. I even used to import graphics with SVG, and whenever I changed the shape or morphology of a creature I had to re-import all the graphics. So I'm going to do all the tooling in Avalonia so that I don't have to import and export files. That goes completely against the file format, but I'm not doing music composition in this tool; music isn't really that interactive, so I'd use some other tool for it. This is what I would want.

This would pretty much complete what MonoGame needs. It's not about creating sounds from physics; that's a compressible-fluid solver with shock waves, way more complicated than anything I'm simulating (NASA did that 10-15 years ago), so it's overkill. This isn't minimal, but it's great, and it's all you need: just play a sample of a sound and add a little bit of reverb to make it fit the context of the scene.

For a custom editor I would serialize the settings with data members or whatever and then reload them. I would have a file watcher: every sound file of a certain type that goes into the watched folder shows up in the tool. I drag and drop that file onto an object, like a circle, and then I see an emitter. I click on that and I see a property sheet, and by default it starts making a noise. I always have things do something by default so you don't have to wonder what's going on or go looking for the "turn on" button; it makes a noise, and you can hit the master volume if you hate it. Then you click on the emitter and you see all the parameters for that sound effect. Can you add reverb? Then expand that out; or maybe have another list box with a bunch of canned reverb presets.

I guess you could use the original XACT tool as a bit of a model, but it is a mess with the banks and everything; I think it's really hard to figure out. I would have a reverb property on the sound effect. The sound has a file name, and then I can adjust some parameters on it: pan, zoom. What I do is add factors, like: what is the X factor of pan relative to where the player is? That's just my custom tool, though.
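The "file watcher on a sound folder" part of that workflow is plain .NET; a minimal sketch (the folder path and the list-update hook are placeholders for whatever the editor actually uses):

```csharp
using System;
using System.IO;

// Watch the tool's sound folder; any new .wav shows up in the editor's
// draggable sound list. "Content/Sounds" is a placeholder path.
var watcher = new FileSystemWatcher("Content/Sounds", "*.wav")
{
    EnableRaisingEvents = true,
    IncludeSubdirectories = false,
};
watcher.Created += (_, e) =>
{
    // Stand-in for adding the file to the editor's observable collection.
    Console.WriteLine($"new sound available: {e.Name}");
};
```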
Yeah, it's off-topic; I'm just giving you a use case. Everybody making an original game is in this position. You can't be like big Unity and assume it's all going to be 3D; I don't want 3D if I'm doing a 2D game. Anyway, I've already done the rest; all I have to do is add reverb.

So how do I start? I want to start with something that sounds OK, so I don't want to begin by setting every property. A bunch of static reverb presets: either in a combo box that sets the values and then lets you tweak the individual parameters, or dragged from another box of effects onto the sound effect. I don't know exactly how the UI would work. What's important is that I get Avalonia working with MonoGame/KNI and can say: "hey ChatGPT, bind a bunch of sliders in a grid to this structure using the min/max values in the metadata."

I would be strict about making sure it works with any one parameter changing in real time, without stuttering and without garbage collection ruining it. Audio can be pretty quick, and you can pick your speakers if you have them, but with the GC kicking in or too many effects going on at once it gets quite intensive, especially reverb. Still, it's really not that bad: thousands of voices would be a problem, but I'm probably going to have one gun shooting in a room and a few other noises, maybe 10 voices at once, nothing that crazy.

What I don't understand yet is the voices, but that's right there in the code. I think it used to work, because somebody put code in there that makes the OpenAL version sound like the other one. Good morning, I guess.
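A sketch of the "bind sliders in a grid to this structure using the min/max values in the metadata" idea, assuming plain reflection over System.ComponentModel.DataAnnotations range attributes and stock Avalonia controls. The model class and its ranges are placeholders:

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using System.Reflection;
using Avalonia.Controls;

// Placeholder parameter object; names and ranges are illustrative only.
public class ReverbModel
{
    [Range(0.1, 20.0)] public double DecayTimeSec { get; set; } = 1.5;
    [Range(0.0, 1.0)]  public double WetDryMix    { get; set; } = 0.3;
}

public static class SliderRig
{
    // Build one labeled slider per [Range]-annotated property and write
    // slider changes straight back into the live model for auditioning.
    public static StackPanel Build(ReverbModel model)
    {
        var panel = new StackPanel();
        foreach (var prop in typeof(ReverbModel).GetProperties())
        {
            var range = prop.GetCustomAttribute<RangeAttribute>();
            if (range is null) continue;

            var slider = new Slider
            {
                Minimum = Convert.ToDouble(range.Minimum),
                Maximum = Convert.ToDouble(range.Maximum),
                Value   = (double)prop.GetValue(model)!,
            };
            slider.PropertyChanged += (_, e) =>
            {
                if (e.Property == Slider.ValueProperty)
                    prop.SetValue(model, slider.Value);
            };

            panel.Children.Add(new TextBlock { Text = prop.Name });
            panel.Children.Add(slider);
        }
        return panel;
    }
}
```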
I'll drop this here, for future reference.
Thanks, wow. I implemented the blocking using a rough sparse ray caster. I found both the IXAudio2 and OpenAL extensions in Silk; they don't make the effort to factor out the commonality. I think many of the params are explained, so we could expose them somehow, or make a superset and document what is special to which platform in the XML comments. I tested windowing on Android and Windows a while back; it's not that great.

As far as the C# code wrapping the platform implementation goes, it's either been generated and committed or hand-rolled; the comments are there. It's a 3-year-old commit, no code gen is part of the build, and the commit is massive: https://github.com/dotnet/Silk.NET/blob/main/src/OpenAL/Extensions/Silk.NET.OpenAL.Extensions.Creative/EffectExtension.cs#L16 The tests are generated, so it might work.

I looked for a new XACT tool and didn't see one. NWave has active development, and there's also a feature request there to allow for rigging, testing, and tuning. Not sure if I'll get time to try it soon; it might take a few hours. If I try, I'll mention it first. I would first try with Silk and the other one and see how hard it would be to put those bindings into this MG branch.

As for the "sound blocking": the OpenAL extension PDF and its advice about using reverb is exactly how I hoped to use it. I implemented it with rays; very rough, but good enough.

Tooling: a sound folder with a file watcher; drag-and-drop emitters; set properties via a sheet bound to a view model; bind sliders and presets (all in Silk) via ChatGPT-generated Avalonia code; then audition and play around with it. ChatGPT even wrote me a hybrid FX + CPU cutter and ray caster to measure interiors, since the physics rays are a bit expensive for measuring rooms, even in parallel. I couldn't believe how good its code gen is (sometimes); XNA FX and pixel shaders are old, so it can do all of that. It encoded a map from the hit points to the CPU ray object handles in the texture margin, as a dictionary.

The XACT pipeline in MG is broken and no one has updated it, so I think I'm going to go quick and dirty, or I'll simply never get it done. I do hope to get some kind of reverb into this thing; it needs a refresh. I might have the Avalonia test rig with docking running, but it's not in sync with the very latest Avalonia, and there have been quite a few releases; scripting in RoslynPad is bad. As for tooling, the Avalonia effort in Stride has stalled; it's too hard for non-gurus, but for simple tests or AI widgets, like maybe graphics nodes, it's good.

I can't promise anything; I have way too many projects and I'm fried after 16 months on emerging tech. Example: today I learned Los Alamos has a 1-billion-neuron soliton-spike-signal 2D neuromorphic AI; some do physics, supposedly 100x faster, 2400 watts for a human leg, each chip 30 watts. I just want to finish what I started. I do like programming by just telling the computer what I want in plain natural language; that's ideal for me. It's really hard to focus when stuff I thought was decades away is arriving, really seems practical, truly 100x faster, in line with topological physics and using the time dimension for something other than heat.

I could break this up, but it's all related to rigging basic features in a visual way. Tests and samples are great. Picovoice uses a tiny Avalonia box, and that's what I use for an assistant on ChatGPT-3 (Clippy), though that's in a broken state. I've got to send this with the typos, sorry; this was my surf day, and it's going to be a night surf, I guess.
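For the "measure interiors with rays" part, here is a very rough sketch of the idea as I read it: cast a sparse fan of rays from the listener, average the hit distances, and treat escaped rays as open space. The raycast delegate is a stand-in for whatever physics query the game has:

```csharp
using System;
using System.Numerics;

public static class RoomProbe
{
    // Returns a rough characteristic room size in world units.
    // `raycast` returns the distance to the first hit, or null if the
    // ray escapes within maxDistance (i.e. effectively outdoors).
    public static float EstimateRoomSize(
        Vector2 listener,
        Func<Vector2, Vector2, float?> raycast,
        int rayCount = 16,
        float maxDistance = 50f)
    {
        float sum = 0f;
        int hits = 0;
        for (int i = 0; i < rayCount; i++)
        {
            float angle = i * MathF.Tau / rayCount;
            var dir = new Vector2(MathF.Cos(angle), MathF.Sin(angle));
            if (raycast(listener, dir) is float d) { sum += d; hits++; }
        }
        // Mostly-escaped rays: treat as a large, open space.
        return hits > rayCount / 2 ? sum / hits : maxDistance;
    }
}
```

The estimated size could then drive a canned preset choice, or the room-size/decay members of the reverb parameters.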
A couple of things. Silk updated the OpenGL3, OpenAL Soft, and other bindings very recently. OpenAL itself doesn't seem to support Android. So my summary is: use OpenAL Soft, and try Windows via XAudio2 first, sorting out voices, querying the device, and failing gracefully; then try Windows via OpenAL; then Android via OpenAL Soft, if that's not a big deal. Moving from OpenAL Soft to OpenAL means some #ifdef Android here and there. I don't believe the 3D listener has to be used. Hopefully Windows on ARM (Qualcomm has lots of DSPs) will be OK with XAudio2, as there are some newer APIs.

Someone mentioned they just go directly to the Android API via the JDK; android.media.audiofx.AudioEffect is the Java version, and some just went directly to that and skipped Silk and the NDK. That might not be easy from C#, though. The trouble is probably Silk's building and packaging accessibility, and why it's not going to be fixed.

https://github.com/search?q=repo%3Akcat%2Fopenal-soft+reverb&type=code

There's an IAudioClient in the Windows API now, and Silk has cared about Windows on ARM, which has tons of DSP. FIRs or special headphone sound can take a lot of resources if done in software. I did get Freeverb to work OK (the software version, very basic). I see NWave and quite a few software implementations, but with 8 cores and the GC and such it's still probably a bad idea.

So I might get to it, or at least try it. ChatGPT had some advice for the Windows side as well. I'm still looking at using Avalonia and maybe some scripting to make a sort of basic test rig with docking. It hosts the control that copies MG into a writeable bitmap rather than sharing the buffer, via Avalonia.Inside; that might not be the best way. Avalonia isn't good to deploy, especially on mobile; I use another launcher and a basic hand-coded UI for that.
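The "try XAudio2 first, fail gracefully, then OpenAL Soft" order could be rigged like this. IReverbBackend and the two concrete types are placeholders, not real KNI/MonoGame types:

```csharp
using System;

public interface IReverbBackend { void Init(); }

// Placeholder backends; Init() would query the device / check the
// extension (e.g. ALC_EXT_EFX) and throw if the platform can't do it.
public sealed class XAudio2ReverbBackend : IReverbBackend
{
    public void Init() { /* create mastering voice, submix, reverb effect */ }
}
public sealed class OpenAlSoftReverbBackend : IReverbBackend
{
    public void Init() { /* open device, verify EFX, gen effect slots */ }
}

public static class ReverbBootstrap
{
    // Try each backend in order; a failure just moves on to the next.
    // Returning null means: play everything dry rather than crash.
    public static IReverbBackend? Create()
    {
        var candidates = new Func<IReverbBackend>[]
        {
            () => new XAudio2ReverbBackend(),    // Windows / Windows-on-ARM
            () => new OpenAlSoftReverbBackend(), // Android and the rest
        };
        foreach (var make in candidates)
        {
            try { var b = make(); b.Init(); return b; }
            catch (Exception) { /* missing device or extension: try next */ }
        }
        return null;
    }
}
```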
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →