
Zoom value? #307

Closed
philnewman opened this issue Jun 26, 2018 · 8 comments

Comments

@philnewman

Is there a way to set a default zoom value? I've not been able to adjust it except manually.

@ericblade
Collaborator

ericblade commented Jun 27, 2018

haven't actually tested this, but in your init callback, you should be able to do something like:

const track = Quagga.CameraAccess.getActiveTrack();
track.applyConstraints({advanced: [{zoom: parseFloat(value)}]});

taken from https://github.com/serratus/quaggaJS/blob/eff0c5ad12b648e5ec938ce13b4cdc175effc5bb/example/live_w_locator.js

You could probably also supply that as a constraint in the call to Quagga.init. Please let us know how that works for you :)
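A minimal sketch of that second idea, passing the zoom inside the constraints handed to Quagga.init (untested; whether a browser honors an `advanced` zoom constraint at stream-open time is device-dependent, and the value `2` here is purely illustrative — a real app should read the supported range from `getCapabilities()` first):

```javascript
// Hypothetical sketch: include zoom in the getUserMedia constraints that
// Quagga.init forwards to the camera. Support varies by browser/device.
var config = {
    inputStream: {
        type: "LiveStream",
        constraints: {
            width: { min: 640 },
            height: { min: 480 },
            facingMode: "environment",
            advanced: [{ zoom: 2 }] // illustrative value, not from the real capability range
        }
    }
};
// In the page you would then call:
// Quagga.init(config, function (err) { if (!err) Quagga.start(); });
```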

@philnewman
Author

I've tried exactly that code and attempted:

const track = Quagga.CameraAccess.getActiveTrack();
const capabilities = track.getCapabilities(); // needed for capabilities.zoom.max below
track.applyConstraints({advanced: [{zoom: capabilities.zoom.max}]});

Still no luck: the camera always opens unzoomed, and the zoom has to be adjusted manually.

@ericblade
Collaborator

But your manual adjustment uses exactly the same code?

@philnewman
Author

philnewman commented Jun 29, 2018

Yes, but ideally I'd like not to need the manual adjustment at all; I'd like to start at max zoom from launch. Here's what I've got so far (without the manual adjustment):

<html>
  
<head></head>
  <body>
    
    <div id="modal" title="Barcode scanner">
        <span class="found"></span>
        <div id="interactive" class="viewport"></div>
    </div>
    
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
    <script src="quagga.min.js"></script>
  <script>
$(function() {
var App = {
        init : function() {
            Quagga.init(this.state, function(err) {
                if (err) {
                    console.log(err);
                    return;
                }
                App.attachListeners();
                App.checkCapabilities();
                Quagga.start();
            });
        },
        checkCapabilities: function() {
            var track = Quagga.CameraAccess.getActiveTrack();
            var capabilities = {};
            if (typeof track.getCapabilities === 'function') {
                capabilities = track.getCapabilities();
            }
            //this.applySettingsVisibility('zoom', capabilities.zoom.max);
            if (capabilities.zoom) {
                // apply the constraint on the track itself, not on App
                track.applyConstraints({advanced: [{zoom: capabilities.zoom.max}]});
            }
            this.applySettingsVisibility('torch', capabilities.torch);
        },
        updateOptionsForMediaRange: function(node, range) {
            console.log('updateOptionsForMediaRange', node, range);
            var NUM_STEPS = 6;
            var stepSize = (range.max - range.min) / NUM_STEPS;
            var option;
            var value;
            while (node.firstChild) {
                node.removeChild(node.firstChild);
            }
            for (var i = 0; i <= NUM_STEPS; i++) {
                value = range.min + (stepSize * i);
                option = document.createElement('option');
                option.value = value;
                option.innerHTML = value;
                node.appendChild(option);
            }
        },
        applySettingsVisibility: function(setting, capability) {
            if (typeof capability === 'boolean') {
                var node = document.querySelector('input[name="settings_' + setting + '"]');
                if (node) {
                    node.parentNode.style.display = capability ? 'block' : 'none';
                }
                return;
            }
            if (window.MediaSettingsRange && capability instanceof window.MediaSettingsRange) {
                var node = document.querySelector('select[name="settings_' + setting + '"]');
                if (node) {
                    this.updateOptionsForMediaRange(node, capability);
                    node.parentNode.style.display = 'block';
                }
                return;
            }
        },
        initCameraSelection: function(){
            var streamLabel = Quagga.CameraAccess.getActiveStreamLabel();

            return Quagga.CameraAccess.enumerateVideoDevices()
            .then(function(devices) {
                function pruneText(text) {
                    return text.length > 30 ? text.substr(0, 30) : text;
                }
                var $deviceSelection = document.getElementById("deviceSelection");
                if (!$deviceSelection) { return; } // this page has no device <select>
                while ($deviceSelection.firstChild) {
                    $deviceSelection.removeChild($deviceSelection.firstChild);
                }
                devices.forEach(function(device) {
                    var $option = document.createElement("option");
                    $option.value = device.deviceId || device.id;
                    $option.appendChild(document.createTextNode(pruneText(device.label || device.deviceId || device.id)));
                    $option.selected = streamLabel === device.label;
                    $deviceSelection.appendChild($option);
                });
            });
        },
        attachListeners: function() {
            var self = this;

            self.initCameraSelection();
            $(".controls").on("click", "button.stop", function(e) {
                e.preventDefault();
                Quagga.stop();
            });

            $(".controls .reader-config-group").on("change", "input, select", function(e) {
                e.preventDefault();
                var $target = $(e.target),
                    value = $target.attr("type") === "checkbox" ? $target.prop("checked") : $target.val(),
                    name = $target.attr("name"),
                    state = self._convertNameToState(name);

                console.log("Value of "+ state + " changed to " + value);
                self.setState(state, value);
            });
        },
        _accessByPath: function(obj, path, val) {
            var parts = path.split('.'),
                depth = parts.length,
                setter = (typeof val !== "undefined") ? true : false;

            return parts.reduce(function(o, key, i) {
                if (setter && (i + 1) === depth) {
                    if (typeof o[key] === "object" && typeof val === "object") {
                        Object.assign(o[key], val);
                    } else {
                        o[key] = val;
                    }
                }
                return key in o ? o[key] : {};
            }, obj);
        },
        _convertNameToState: function(name) {
            return name.replace("_", ".").split("-").reduce(function(result, value) {
                return result + value.charAt(0).toUpperCase() + value.substring(1);
            });
        },
        detachListeners: function() {
            $(".controls").off("click", "button.stop");
            $(".controls .reader-config-group").off("change", "input, select");
        },
        applySetting: function(setting, value) {
            var track = Quagga.CameraAccess.getActiveTrack();
            if (track && typeof track.getCapabilities === 'function') {
                switch (setting) {
                case 'zoom':
                    return track.applyConstraints({advanced: [{zoom: parseFloat(value)}]});
                case 'torch':
                    return track.applyConstraints({advanced: [{torch: !!value}]});
                }
            }
        },
        setState: function(path, value) {
            var self = this;

            if (typeof self._accessByPath(self.inputMapper, path) === "function") {
                value = self._accessByPath(self.inputMapper, path)(value);
            }

            if (path.startsWith('settings.')) {
                var setting = path.substring(9);
                return self.applySetting(setting, value);
            }
            self._accessByPath(self.state, path, value);

            console.log(JSON.stringify(self.state));
            App.detachListeners();
            Quagga.stop();
            App.init();
        },
        inputMapper: {
            inputStream: {
                constraints: function(value){
                    if (/^(\d+)x(\d+)$/.test(value)) {
                        var values = value.split('x');
                        return {
                            width: {min: parseInt(values[0])},
                            height: {min: parseInt(values[1])}
                        };
                    }
                    return {
                        deviceId: value
                    };
                }
            },
            numOfWorkers: function(value) {
                return parseInt(value);
            },
            decoder: {
                readers: function(value) {
                    if (value === 'ean_extended') {
                        return [{
                            format: "ean_reader",
                            config: {
                                supplements: [
                                    'ean_5_reader', 'ean_2_reader'
                                ]
                            }
                        }];
                    }
                    return [{
                        format: value + "_reader",
                        config: {}
                    }];
                }
            }
        },
        state: {
            inputStream: {
                type : "LiveStream",
                constraints: {
                    width: {min: 640},
                    height: {min: 480},
                    aspectRatio: {min: 1, max: 2},
                    facingMode: "environment" // or user
                }
            },
            locator: {
                patchSize: "medium",
                halfSample: true
            },
            numOfWorkers: 2,
            frequency: 10,
            decoder: {
                readers : [{
                    format: "code_128_reader",
                    config: {}
                }]
            },
            locate: true
        },
        lastResult : null
    };

    App.init();
  
    Quagga.onDetected(function(result) {
        var code = result.codeResult.code;
        Quagga.stop();
        window.location.href="scannerview.php?barcode=" + code;
    });
});
 </script>
</body>
</html>

// edited Oct-9-2020 to set max aspectRatio request to 2, not 100, to prevent people from using that value in the future, as it blows up iOS 14 - Eric Blade

@ericblade
Collaborator

I'd like to see if doing it the way I suggested works with my app on my Nexus 5, but I've been incredibly busy dealing with employment contracts the last couple of days :) My first thought is that maybe some kind of time delay has to pass between the init callback firing and the device being able to accept a zoom command. My second thought is that maybe the camera only accepts commands that are accompanied by user interaction; that's now a rule for playing sound in Chrome, so I could see it being a rule in other areas too.. hmm.
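If the user-interaction theory were right, one workaround would be to defer the zoom until the first tap on the viewport. A hedged sketch of that idea; the track here is a mock so the snippet runs outside a browser, and `makeZoomOnGesture` is a hypothetical helper, not part of Quagga (real code would pass `Quagga.CameraAccess.getActiveTrack` instead):

```javascript
// Hypothetical: apply zoom only from inside a user-gesture handler,
// mirroring Chrome's autoplay rule mentioned above.
function makeZoomOnGesture(getTrack) {
    var applied = false;
    return function onGesture() {
        if (applied) return Promise.resolve("already-applied");
        var track = getTrack();
        var caps = track.getCapabilities();
        applied = true;
        return track.applyConstraints({ advanced: [{ zoom: caps.zoom.max }] })
            .then(function () { return "applied"; });
    };
}

// Mock track, for illustration only:
var mockTrack = {
    getCapabilities: function () { return { zoom: { min: 1, max: 4 } }; },
    applyConstraints: function (c) { return Promise.resolve(c); }
};
var handler = makeZoomOnGesture(function () { return mockTrack; });
// In the page: document.querySelector('#interactive').addEventListener('click', handler);
```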

@philnewman
Author

Got it! After Quagga.start(); I added:

const track = Quagga.CameraAccess.getActiveTrack();
sleep(5000); // sleep is a blocking delay helper, not a JS built-in; any ~5 s delay works
const capabilities = track.getCapabilities();
track.applyConstraints({advanced: [{zoom: capabilities.zoom.max}]}).catch(e => console.log(e));

This CodePen: https://codepen.io/serratus/pen/zzxaOL and this blog post: https://www.oberhofer.co/mediastreamtrack-and-its-capabilities/ were extremely helpful!

@ericblade
Collaborator

so, a slight delay after start, before the camera is able to accept the zoom. I hate having to do workarounds that involve delays like that, but... yeah. i've had to ship things like that before, too. :-S

@ericblade
Collaborator

.... seems that I need to get a better understanding of how this code all works, but it looks like you should be able to init(), sleep(), applyConstraints(), then start(). The sleep() part is what bothers me, since the init callback looks like it should fire after everything is up and ready; at least, the callback runs once CameraAccess responds to a request() call. So I'm confused. Might just be one of those things. On the other hand, I wonder if there's a specific place a callback could be added to signal when the camera is ready to zoom.
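That init → wait → applyConstraints → start ordering can be sketched with a Promise-based delay instead of a blocking sleep(). This is an untested sketch: `delay` and `startWithZoom` are hypothetical helpers, the 500 ms is a guess at how long the camera needs, and the Quagga calls are commented out since they require a browser with a camera:

```javascript
// Promise-based stand-in for the sleep() workaround above.
function delay(ms) {
    return new Promise(function (resolve) { setTimeout(resolve, ms); });
}

// Hypothetical sequencing: wait a moment after init, apply max zoom, then start.
function startWithZoom() {
    return delay(500) // guessed settling time; tune for the target device
        .then(function () {
            // var track = Quagga.CameraAccess.getActiveTrack();
            // var caps = track.getCapabilities();
            // return track.applyConstraints({advanced: [{zoom: caps.zoom.max}]});
        })
        .then(function () {
            // Quagga.start();
            return "started";
        });
}
```

Called from the Quagga.init callback, this avoids freezing the UI thread the way a busy-wait sleep() would.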
