Merge pull request #34 from wtq2255/master
update v0.1.8
wtq2255 authored Feb 28, 2024
2 parents 51a5314 + d87ef69 commit 4d91906
Showing 44 changed files with 2,090 additions and 164 deletions.
2 changes: 1 addition & 1 deletion LICENSE.md
@@ -1,6 +1,6 @@
## MIT License

Copyright (c) 2021--2023, `audioFlux` development team.
Copyright (c) 2021--2024, `audioFlux` development team.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
18 changes: 7 additions & 11 deletions README.md
@@ -2,17 +2,6 @@

# audioFlux

<!--<p align="center">
<img src='./image/logo.png' width="380" style="max-width: 100%;" >
</p>-->


<!--
[![Pypi Downloads](https://img.shields.io/pypi/dm/aubio.svg?label=Pypi%20downloads)](https://pypi.org/project/aubio/)
[![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/aubio.svg?label=Conda%20downloads)](https://anaconda.org/conda-forge/aubio)
[![Documentation](https://readthedocs.org/projects/aubio/badge/?version=latest)](http://aubio.readthedocs.io/en/latest/?badge=latest "Latest documentation") -->
<!--![GitHub code size in bytes](https://img.shields.io/github/languages/code-size/libAudioFlux/audioFlux)-->

<!-- shields.io -->
![GitHub Workflow Status (with branch)](https://img.shields.io/github/actions/workflow/status/libAudioFlux/audioFlux/build.yml?branch=master)
![example branch parameter](https://github.com/libAudioFlux/audioFlux/actions/workflows/build.yml/badge.svg?branch=master)
@@ -39,6 +28,13 @@ field such as Classification, Separation, Music Information Retrieval(MIR) and A
- High performance, core part C implementation, FFT hardware acceleration based on different platforms, convenient for large-scale data feature extraction.
- It supports the mobile end and meets the real-time calculation of audio stream at the mobile end. -->


##### New Features
* v0.1.8
* Add a variety of Pitch algorithms: `YIN`, `CEP`, `PEF`, `NCF`, `HPS`, `LHS`, `STFT` and `FFP`.
* Add `PitchShift` and `TimeStretch` algorithms.
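
Below is a minimal usage sketch for one of the new pitch estimators. It is not part of this PR; `af.read`, the `samplate` keyword, and the shape of the `pitch()` return value are assumptions based on the existing audioFlux API and may differ in the released 0.1.8 interface.

```python
# Sketch only: per-frame pitch estimation with the new PitchYIN class.
# "voice.wav" is a placeholder path; the constructor/return signature is assumed.
import audioflux as af

audio_arr, sr = af.read("voice.wav")
pitch_obj = af.PitchYIN(samplate=sr)
result = pitch_obj.pitch(audio_arr)
# Some pitch classes return auxiliary per-frame arrays alongside the frequencies.
fre_arr = result[0] if isinstance(result, tuple) else result
print(fre_arr.shape)  # one frequency estimate (Hz) per analysis frame
```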


### Table of Contents

- [Overview](#overview)
2 changes: 1 addition & 1 deletion conda/meta.yaml
@@ -1,4 +1,4 @@
{% set version = "0.1.7" %}
{% set version = "0.1.8" %}

package:
name: audioflux
9 changes: 7 additions & 2 deletions docs/_static/versions.json
@@ -1,9 +1,14 @@
[
{
"name": "0.1.7 (stable)",
"version": "0.1.7",
"name": "0.1.8 (stable)",
"version": "0.1.8",
"url": "https://audioflux.top/stable/"
},
{
"name": "0.1.7",
"version": "0.1.7",
"url": "https://audioflux.top/v0.1.7/"
},
{
"name": "0.1.6",
"version": "0.1.6",
13 changes: 12 additions & 1 deletion docs/changelog.rst
@@ -1,9 +1,20 @@
ChangeLog
=========
v0.1.8
------
* New features:
* Add `audioflux.STFT` algorithm.
* Add `audioflux.PitchFFP` algorithm.
* Add `audioflux.PitchShift` and `audioflux.TimeStretch` algorithms.
* Add method `temporal` to `audioflux.Temporal`.
* Optimize performance.
* Bug fixes:
* Fix `nmf` bug.
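
The two new time-domain effects can be exercised roughly as sketched below. This is an illustration, not code from this PR: the constructor arguments and the `pitch_shift()` / `time_stretch()` method names are assumptions; check the 0.1.8 API reference for the actual signatures.

```python
# Sketch only: shift pitch by 4 semitones and stretch duration by 1.5x.
# Method names and keyword arguments below are assumed, not confirmed.
import audioflux as af

audio_arr, sr = af.read("voice.wav")  # placeholder input file

ps_obj = af.PitchShift(samplate=sr)
shifted_arr = ps_obj.pitch_shift(audio_arr, n_semitone=4)

ts_obj = af.TimeStretch(samplate=sr)
stretched_arr = ts_obj.time_stretch(audio_arr, rate=1.5)
```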

v0.1.7
------
* New features:
* Add Cepstrogram algorithm.
* Add `audioflux.Cepstrogram` algorithm.
* Add PitchCEP/PitchHPS/PitchLHS/PitchNCF/PitchPEF/PitchSTFT/PitchYIN algorithm.
* Modified API:
* `audioflux.display.fill_wave` add `times` params.
9 changes: 5 additions & 4 deletions docs/installation.rst
@@ -79,10 +79,10 @@ Linux build
# For CentOS:
$ sudo yum install -y cmake clang

2. Installing **MKL** lib dependencies on Linux
2. Installing the **MKL** library dependencies on Linux (optional, but recommended)

You can
use `this installation document <https://www.intel.cn/content/www/cn/zh/developer/tools/oneapi/onemkl-download.html?operatingsystem=linux>`_
use `this installation document <https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html?operatingsystem=linux>`_
to install MKL

After installation, you need to set the environment variables `MKL_INCLUDE_PATH` and `MKL_LIB_PATH` for MKL.
@@ -128,8 +128,9 @@ macOS build
Windows build
^^^^^^^^^^^^^

Building from source is currently not supported. Only supports pip installation. If you need to build from source, The
easiest way to build audioFlux is by cross-compilation on Linux/macOS using MinGW::
Building from source is currently not supported on Windows; installing with pip is recommended.

If you need to build from source, the easiest way is to cross-compile audioFlux on Linux/macOS using MinGW::

$ python setup.py build_py_win
$ python setup.py install
9 changes: 5 additions & 4 deletions docs/installing.md
@@ -84,10 +84,10 @@ Build and compile successfully, the project build compilation results are in the
$ sudo yum install -y cmake clang
```

2. Installing **MKL** lib dependencies on Linux
2. Installing the **MKL** library dependencies on Linux (optional, but recommended)

You can
use [this installation document](https://www.intel.cn/content/www/cn/zh/developer/tools/oneapi/onemkl-download.html?operatingsystem=linux)
use [this installation document](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html?operatingsystem=linux)
to install MKL

After installation, you need to set the environment variables `MKL_INCLUDE_PATH` and `MKL_LIB_PATH` for MKL.
@@ -151,8 +151,9 @@ Build and compile successfully, the project build compilation results are in the

#### Windows build

Building from source is currently not supported. Only supports pip installation. If you need to build from source, The
easiest way to build audioFlux is by cross-compilation on Linux/macOS using MinGW.
Building from source is currently not supported on Windows; installing with pip is recommended.

If you need to build from source, the easiest way is to cross-compile audioFlux on Linux/macOS using MinGW.

```shell
$ python setup.py build_py_win
8 changes: 8 additions & 0 deletions docs/mir/harmonics.rst
@@ -0,0 +1,8 @@
Harmonics
=========

.. autoclass:: audioflux.Harmonic
:members:

.. autoclass:: audioflux.HarmonicRatio
:members:
4 changes: 4 additions & 0 deletions docs/mir/mir.rst
@@ -9,6 +9,10 @@ The mir module contains the following algorithms:
pitch
onsetBase
hpss
harmonics
pitchShift
timeStretch


+-----------------+---------------------------------+
| MIR | algorithm |
11 changes: 7 additions & 4 deletions docs/mir/pitch.rst
@@ -1,6 +1,12 @@
Pitch
=====

.. autoclass:: audioflux.PitchYIN
:members:

.. autoclass:: audioflux.PitchPEF
:members:

.. autoclass:: audioflux.PitchCEP
:members:

@@ -13,11 +19,8 @@ Pitch
.. autoclass:: audioflux.PitchNCF
:members:

.. autoclass:: audioflux.PitchPEF
:members:

.. autoclass:: audioflux.PitchSTFT
:members:

.. autoclass:: audioflux.PitchYIN
.. autoclass:: audioflux.PitchFFP
:members:
5 changes: 5 additions & 0 deletions docs/mir/pitchShift.rst
@@ -0,0 +1,5 @@
PitchShift
==========

.. autoclass:: audioflux.PitchShift
:members:
5 changes: 5 additions & 0 deletions docs/mir/timeStretch.rst
@@ -0,0 +1,5 @@
TimeStretch
===========

.. autoclass:: audioflux.TimeStretch
:members:
5 changes: 5 additions & 0 deletions docs/transforms/stft.rst
@@ -0,0 +1,5 @@
STFT - Short Time Fourier Transform
===================================

.. autoclass:: audioflux.STFT
:members:
1 change: 1 addition & 0 deletions docs/transforms/transforms.rst
@@ -51,6 +51,7 @@ The following transform are not supports multiple frequency scale types, only us
dwt
wpt
swt
stft


Synchronized Squeezing
4 changes: 4 additions & 0 deletions python/audioflux/__init__.py
@@ -10,6 +10,7 @@
from .pwt import *
from .reassign import *
from .st import *
from .stft import *
from .swt import *
from .synsq import *
from .temporal import *
@@ -24,6 +25,9 @@
from .feature import *
from .core import *
from .spectrogram import MelSpectrogram, BarkSpectrogram, ErbSpectrogram
from . import mir
from . import feature
from . import dsp
from . import utils
from . import type
from . import display
2 changes: 1 addition & 1 deletion python/audioflux/__version__.py
@@ -1,3 +1,3 @@
__title__ = 'audioflux'
__description__ = 'A library for audio and music analysis, feature extraction.'
__version__ = '0.1.7'
__version__ = '0.1.8'
20 changes: 10 additions & 10 deletions python/audioflux/bft.py
@@ -231,7 +231,7 @@ def cal_time_length(self, data_length):
Calculate the length of a frame from audio data.
- ``fft_length = 2 ** radix2_exp``
- ``(data_length - fft_length) / slide_length + 1``
- ``(data_length - fft_length) // slide_length + 1``
Parameters
----------
@@ -444,25 +444,25 @@ def _get_temporal_data(self, data_length):

fn = self._lib['bftObj_getTemporalData']
fn.argtypes = [POINTER(OpaqueBFT),
POINTER(POINTER(c_float)),
POINTER(POINTER(c_float)),
POINTER(POINTER(c_float)),
POINTER(c_void_p),
POINTER(c_void_p),
POINTER(c_void_p),
]
time_length = self.cal_time_length(data_length)

pp_energy_arr = pointer(pointer(c_float()))
pp_rms_arr = pointer(pointer(c_float()))
pp_zcr_arr = pointer(pointer(c_float()))
pp_energy_arr = pointer(c_void_p())
pp_rms_arr = pointer(c_void_p())
pp_zcr_arr = pointer(c_void_p())

fn(self._obj,
pp_energy_arr,
pp_rms_arr,
pp_zcr_arr
)

energy_arr = np.array([pp_energy_arr.contents[x] for x in range(time_length)], dtype=np.float32)
rms_arr = np.array([pp_rms_arr.contents[x] for x in range(time_length)], dtype=np.float32)
zcr_arr = np.array([pp_zcr_arr.contents[x] for x in range(time_length)], dtype=np.float32)
energy_arr = np.frombuffer((c_float * time_length).from_address(pp_energy_arr[0]), np.float32).copy()
rms_arr = np.frombuffer((c_float * time_length).from_address(pp_rms_arr[0]), np.float32).copy()
zcr_arr = np.frombuffer((c_float * time_length).from_address(pp_zcr_arr[0]), np.float32).copy()

return energy_arr, rms_arr, zcr_arr
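
The rewritten pointer handling above wraps the C-owned float buffers with `np.frombuffer` and copies them into NumPy arrays. A self-contained sketch of the same pattern, using a ctypes-allocated array to stand in for memory returned by the C library:

```python
# Sketch: turning a raw C float buffer (known length) into a NumPy array.
from ctypes import c_float, c_void_p, cast
import numpy as np

n = 8
buf = (c_float * n)(*range(n))            # pretend this block was allocated by the C library
addr = cast(buf, c_void_p).value          # raw address, as held by a c_void_p out-parameter
view = (c_float * n).from_address(addr)   # ctypes view over that memory, no copy yet
arr = np.frombuffer(view, dtype=np.float32).copy()  # copy so NumPy owns the data
print(arr)                                # [0. 1. 2. 3. 4. 5. 6. 7.]
```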

1 change: 1 addition & 0 deletions python/audioflux/classic/__init__.py
@@ -0,0 +1 @@
from .nmf import *
37 changes: 37 additions & 0 deletions python/audioflux/classic/nmf.py
@@ -0,0 +1,37 @@
from ctypes import c_int, POINTER, c_float, pointer

import numpy as np

from audioflux.fftlib import get_fft_lib

__all__ = [
    'nmf',
]


def nmf(X, k, max_iter=300, tp=0, thresh=1e-3, norm=0):
    """Non-negative matrix factorization via the audioFlux C core.

    Factorizes a 2-D float array ``X`` of shape (n, m) into ``W`` (n, k) and
    ``H`` (k, m) such that ``X ~= W @ H``. ``max_iter`` and ``thresh`` control
    the iteration limit and the convergence threshold; ``tp`` and ``norm`` are
    passed through to the underlying C routine. Returns ``(H, W)``.
    """
    X = np.asarray(X, dtype=np.float32, order='C')
    if X.ndim != 2:
        raise ValueError(f"X[ndim={X.ndim}] must be a 2D array")

    fn = get_fft_lib()['nmf']
    fn.argtypes = [
        np.ctypeslib.ndpointer(dtype=np.float32, ndim=2, flags='C_CONTIGUOUS'),
        c_int, c_int, c_int,
        np.ctypeslib.ndpointer(dtype=np.float32, ndim=2, flags='C_CONTIGUOUS'),
        np.ctypeslib.ndpointer(dtype=np.float32, ndim=2, flags='C_CONTIGUOUS'),
        POINTER(c_int), POINTER(c_int),
        POINTER(c_float), POINTER(c_int)
    ]

    n_len, m_len = X.shape

    # Deterministically seeded factor matrices; the C routine updates them in place.
    h_arr = np.arange(1, k * m_len + 1, dtype=np.float32).reshape((k, m_len))
    w_arr = np.arange(1, n_len * k + 1, dtype=np.float32).reshape((n_len, k))

    fn(X, c_int(n_len), c_int(m_len), c_int(k),
       w_arr, h_arr,
       pointer(c_int(max_iter)), pointer(c_int(tp)),
       pointer(c_float(thresh)), pointer(c_int(norm)))

    return h_arr, w_arr
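
A short usage sketch for the new `nmf` helper, based only on the signature and shapes above; note that the function seeds `W` and `H` deterministically and returns them as `(H, W)`. Whether the C routine requires strictly non-negative input is an assumption here.

```python
# Sketch: factor a non-negative (n, m) matrix into k components, X ~= W @ H.
import numpy as np
from audioflux.classic import nmf

X = np.abs(np.random.randn(64, 200)).astype(np.float32)
H, W = nmf(X, k=8, max_iter=300, thresh=1e-3)   # H: (8, 200), W: (64, 8)
print(W.shape, H.shape)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)  # relative reconstruction error
print(err)
```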
11 changes: 2 additions & 9 deletions python/audioflux/display/display.py
@@ -82,24 +82,20 @@ class TimeFormatter(Formatter):
def __init__(self, lag=False, unit=None):

if unit not in ["s", "ms", None]:
raise ValueError("Unknown time unit: {}".format(unit))
raise ValueError("Time unit is not support: {}".format(unit))

self.unit = unit
self.lag = lag

def __call__(self, x, pos=None):
"""Return the time format as pos"""

_, dmax = self.axis.get_data_interval()
vmin, vmax = self.axis.get_view_interval()

# In lag-time axes, anything greater than dmax / 2 is negative time
if self.lag and x >= dmax * 0.5:
# In lag mode, don't tick past the limits of the data
if x > dmax:
return ""
value = np.abs(x - dmax)
# Do we need to tweak vmin/vmax here?
sign = "-"
else:
value = x
@@ -111,7 +107,6 @@ def __call__(self, x, pos=None):
s = "{:.3g}".format(value * 1000)
else:
if vmax - vmin > 3600:
# Hours viz
s = "{:d}:{:02d}:{:02d}".format(
int(value / 3600.0),
int(np.mod(value / 60.0, 60)),
@@ -121,10 +116,8 @@
# Minutes viz
s = "{:d}:{:02d}".format(int(value / 60.0), int(np.mod(value, 60)))
elif vmax - vmin >= 1:
# Seconds viz
s = "{:.2g}".format(value)
else:
# Milliseconds viz
s = "{:.3f}".format(value)

return "{:s}{:s}".format(sign, s)
@@ -274,7 +267,7 @@ def fill_plot(x, y, axes=None, label='', is_legend=True, *,
axes.set_ylim(*y_lims)

axes.plot(x, y, label=label)
if is_legend:
if is_legend and label:
axes.legend()
return axes

1 change: 1 addition & 0 deletions python/audioflux/dsp/__init__.py
@@ -1,2 +1,3 @@
from .czt import *
from .resample import *
from .xcorr import *