
started #13

Closed
cx4 opened this issue Jun 15, 2024 · 12 comments

cx4 commented Jun 15, 2024

I don't understand how to get started quickly. Could you provide a simple example?

Contributor

xhluca commented Jun 16, 2024

Check out this PR, with new added feature (you can install the pre-release from pypi): #12


tianwang2021 commented Jun 18, 2024

Hello, I followed the example you provided and ran `python examples/complete/run_all.py`, but it could not find the package webllama_experimental.

Contributor

xhluca commented Jun 18, 2024

@tianwang2021 should be fixed now, I just pushed a fix.

Contributor

xhluca commented Jun 18, 2024

You might also want to consider updating the image if you want to keep your IP address private.

@tianwang2021

OK, thank you.

Author

cx4 commented Jun 25, 2024

> @tianwang2021 should be fixed now, I just pushed a fix.

I tried to run it on June 25th, and it still didn't work.

Contributor

xhluca commented Jun 25, 2024

@cx4 Seems I missed a few. I've made another change on this commit: cef6b96

Can you pull from main again and try?

Author

cx4 commented Jun 26, 2024

> @cx4 Seems I missed a few. I've made another change on this commit: cef6b96
>
> Can you pull from main again and try?

@xhluca Line 15 of the run_all.py file mentions the tests directory. What files should I put in it?

Contributor

xhluca commented Jun 26, 2024

https://github.com/McGill-NLP/webllama/blob/main/docs%2FREADME.md
Author

cx4 commented Jun 26, 2024

> https://github.com/McGill-NLP/webllama/blob/main/docs%2FREADME.md

Thank you for your answer. Maybe my comprehension is not good enough, but it bothers me to go from one README file to another. Could you provide a simpler tutorial?

Author

cx4 commented Jun 26, 2024

> https://github.com/McGill-NLP/webllama/blob/main/docs%2FREADME.md

```
    from sentence_transformers import SentenceTransformer
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\sentence_transformers\__init__.py", line 15, in <module>
    from sentence_transformers.trainer import SentenceTransformerTrainer
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\sentence_transformers\trainer.py", line 10, in <module>
    from transformers import EvalPrediction, PreTrainedTokenizerBase, Trainer, TrainerCallback
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\transformers\utils\import_utils.py", line 1525, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\transformers\utils\import_utils.py", line 1535, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "D:\Python\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\transformers\trainer.py", line 71, in <module>
    from .optimization import Adafactor, get_scheduler
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\transformers\optimization.py", line 27, in <module>
    from .trainer_pt_utils import LayerWiseDummyOptimizer, LayerWiseDummyScheduler
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\transformers\trainer_pt_utils.py", line 235, in <module>
    device: Optional[torch.device] = torch.device("cuda"),
G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\transformers\trainer_pt_utils.py:235: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
  device: Optional[torch.device] = torch.device("cuda"),
Traceback (most recent call last):
  File "G:\free_project\webllama\webllama-main\examples\complete\run_all.py", line 16, in <module>
    replay = wl.Replay.from_demonstration(demos[0])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\weblinx\__init__.py", line 1093, in from_demonstration
    replay = demonstration.replay
             ^^^^^^^^^^^^^^^^^^^^
  File "D:\Python\Lib\functools.py", line 995, in __get__
    val = self.func(instance)
          ^^^^^^^^^^^^^^^^^^^
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\weblinx\__init__.py", line 98, in replay
    return self.load_json("replay.json")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\weblinx\__init__.py", line 176, in load_json
    results = utils.auto_read_json(self.path / filename, backend=backend, encoding=encoding)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\free_project\webllama\webllama-iuB1fxj9\Lib\site-packages\weblinx\utils\__init__.py", line 196, in auto_read_json
    data = json.load(f)
           ^^^^^^^^^^^^
  File "D:\Python\Lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
                 ^^^^^^^^^
UnicodeDecodeError: 'gbk' codec can't decode byte 0x99 in position 508322: illegal multibyte sequence
```

Contributor

xhluca commented Jun 26, 2024

It seems you are using Windows, which we do not recommend since we have not tested on this OS. Feel free to use WSL, macOS, or Ubuntu instead.
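For what it's worth, the final UnicodeDecodeError in the traceback comes from `json.load` reading replay.json with the platform default encoding, which is GBK on a Chinese-locale Windows install, while the file itself is UTF-8. A minimal stdlib sketch of the likely workaround (the file name and sample data are illustrative, not from the webllama codebase) is to open the file with an explicit `encoding="utf-8"`:

```python
import json
import tempfile
from pathlib import Path

# Write a UTF-8 JSON file containing a non-ASCII character, mimicking
# a replay.json that a GBK-default open() may fail to decode.
data = {"action": "click", "note": "™"}
path = Path(tempfile.mkdtemp()) / "replay.json"
path.write_text(json.dumps(data, ensure_ascii=False), encoding="utf-8")

# Passing encoding="utf-8" explicitly makes the read locale-independent,
# instead of relying on locale.getpreferredencoding() as bare open() does.
with open(path, encoding="utf-8") as f:
    loaded = json.load(f)

print(loaded == data)  # → True
```

Running Python with `PYTHONUTF8=1` (UTF-8 mode) is another way to get the same locale-independent behavior without changing the library code.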

xhluca closed this as completed Jul 16, 2024