fix: song recognition API and demo page fixes and updates

This commit is contained in:
binaryify 2024-04-23 17:05:28 +08:00
parent 52892d2f03
commit 4fe71a4903
7 changed files with 1922 additions and 1879 deletions


# Changelog
### 4.16.4 | 2024.04.23
- Song recognition API and demo page fixes and updates
### 4.16.3 | 2024.04.19
- cookie version updated
### 4.16.2 | 2024.04.18
- Share API fix
- cookie completion
### 4.16.0 | 2024.04.18
- UA updated, fixing APIs reporting the "network congestion" error
- Support manually passing a `ua` parameter to change the user-agent
### 4.15.8 | 2024.03.29
- Podcast voice sorting API updated with extra fields
- Added `delete podcast` API
### 4.15.7 | 2024.03.21
- Chunked podcast upload
### 4.15.6 | 2024.03.12
- Docs and examples updated
### 4.15.5 | 2024.02.28
- Docs updated
### 4.15.3 | 2024.01.29
- Files renamed to avoid 404s when deployed to Vercel
### 4.15.2 | 2024.01.29
- Upload API fix
### 4.15.1 | 2024.01.26
- Fixed version update notice not being shown
### 4.15.0 | 2024.01.26
- Added `personal FM mode selection` API
### 4.14.2 | 2024.01.23
- Docs auto-start
### 4.14.1 | 2024.01.13
- UA pinned to avoid triggering risk control #1867
### 4.14.0 | 2023.12.20
- appver updated
- fix: /artist/detail reporting "network congestion" when called while logged in #1853
- Song like count, song audio quality details, matching local song files against NetEase Cloud song info #1852
- crypto.js refactor #1839
- song_url number type fix #1837
- Updated TypeScript definitions and docs for the song detail API; added Hi-Res type #1836
### 4.13.8 | 2023.10.27
- Docker build platforms adjusted (only linux/arm64 and linux/amd64 are supported)
### 4.13.7 | 2023.10.26
- Fixed slow dependency installation when building the Docker image
### 4.13.6 | 2023.10.26
- Fixed some APIs reporting "network congestion" under anonymous login #1829
### 4.13.5 | 2023.10.22
- Dockerfile updated; removed the linux/s390x platform to prevent build failures
### 4.13.4 | 2023.10.22
- Fixed `update user info` API error #1824
- Removed the `avatarImgId_str` field from some APIs
- Added "not interested" API for daily recommended songs #1816
### 4.13.3 | 2023.10.10
- Added podcast voice search API #1814
### 4.13.2 | 2023.09.25
- wiki-related APIs updated
### 4.13.1 | 2023.09.23
- `/verify/getQr` now returns the QR code dataurl
### 4.13.0 | 2023.09.23
- Added `album wiki summary`, `song wiki summary`, `artist wiki summary`, `mv wiki summary`, `artist search`, `user contributions`, and `user contribution entries/points/cloud shells` APIs #1805
- Added `annual listening report` API #1809
### 4.12.2 | 2023.09.12
- Added `podcast voice list` API
- Fixed anonymous_token path issue #1795
### 4.12.1 | 2023.09.10
- Added `get/userids` (get userid by nickname) API
### 4.12.0 | 2023.09.10
- Song recognition API improved; demo page added
- NMTID added dynamically #1792
- weapi UA pinned
### 4.11.3 | 2023.09.09
- Unified handling of the `code` field in responses
- Unit test fixes
- song/url response ordering fix #1792
### 4.11.2 | 2023.09.09
- Fixed missing file-creation permission on `vercel`
### 4.11.1 | 2023.09.08
- `anonymous_token` configuration extracted
- `anonymous_token` generation stability fix
### 4.11.0 | 2023.09.07
- Added `podcast search` and `podcast voice upload` APIs #1789
### 4.10.2 | 2023.09.04
- Fixed missing files in the docker image #1791
### 4.10.1 | 2023.08.21
- Added the anonymous-login username algorithm; anonymous_token is now generated dynamically
### 4.10.0 | 2023.08.21
- Disabled NMTID; restored phone and email login #1788
- Improved status code handling; added verify-related APIs #1783
- Added support for proxies with username and password #1787
### 4.9.2 | 2023.08.15
- Added `/vip/info/v2` API
### 4.9.1 | 2023.08.15
- `/vip/info` API now accepts a `uid` parameter
### 4.9.0 | 2023.07.20
- Added immersive surround sound quality; updated some copy to match client changes #1760
- Added support for 鲸云臻音 and 鲸云母带 audio quality #1731
- Added 星评馆 brief comments API #1770
- Updated song_detail return type #1772
- Node.js requirement raised to v14
### 4.8.11 | 2023.05.29
- Support requests whose headers carry no cookie
### 4.8.10 | 2023.04.07
- Added private message and notification APIs
### 4.8.9 | 2023.01.18
- Added listen-together APIs #1677
### 4.8.8 | 2023.01.18
- Added Tencent Cloud serverless deployment notes
- Added word-by-word lyrics API #1669
- CloudSearch API now uses eapi instead of weapi #1670
- axios-related code adjustments
### 4.8.7 | 2023.01.04
- Phone login fix [#1658]
### 4.8.6 | 2023.01.02
- Phone login fix [#1658]
### 4.8.5 | 2022.12.28
- Phone login fix [#1661]
### 4.8.4 | 2022.12.19
- Email login fix
### 4.8.3 | 2022.12.19
- Fixed the phone login API [#1653]
- Added several genre-related APIs [#1623]
### 4.8.2 | 2022.09.13
- Fixed song/url API reporting "network congestion"
- Unit test fixes
### 4.8.1 | 2022.09.12
- Resolved "network congestion" errors
### 4.7.0 | 2022.09.02
- New API: new-style song URL [#1583]
- New API: song wiki summary [#1596]
- ResourceType additions [#1497]
### 4.6.7 | 2022.07.17
- Music availability API updated #1544
- Updated description of the premium playlist API #1544
### 4.6.6 | 2022.06.20
- Improved running via npx and added docs
### 4.6.5 | 2022.06.19
- Fixed npx path error
### 4.6.4 | 2022.06.15
- Fixed errors in the add/remove playlist track API #1551
### 4.6.3 | 2022.06.15
- Fixed missing files in the npm package
### 4.6.2 | 2022.05.30
- Fixed failing tests
### 4.6.1 | 2022.05.29
- Fixed API requests prompting for verification; added guest login API; guest cookie refreshed on server start
### 4.6.0 | 2022.05.29
- Fixed API requests prompting for verification [#1541](https://github.com/Binaryify/NeteaseCloudMusicApi/issues/1541)
### 4.5.14 | 2022.05.06
- Fixed pagination in the get-all-playlist-tracks API [#1524](https://github.com/Binaryify/NeteaseCloudMusicApi/pull/1524)
- Added romanized lyrics support [#1523](https://github.com/Binaryify/NeteaseCloudMusicApi/pull/1523)
### 4.5.12 | 2022.04.15
- Added `vinyl time machine` API [#1511](https://github.com/Binaryify/NeteaseCloudMusicApi/pull/1511)
### 4.5.11 | 2022.04.06
- Fixed wrong mimetype detection in the cloud disk API [#1503](https://github.com/Binaryify/NeteaseCloudMusicApi/pull/1503)
### 4.5.10 | 2022.03.28
- Fixed several issues
- Added `playlist update play count` API
### 4.5.9 | 2022.03.20
- Fixed cloud disk upload failing for some filename formats
- Added `/inner/version` API to get the current version
### 4.5.8 | 2022.03.05
- Added artist fan count API [#1485](https://github.com/Binaryify/NeteaseCloudMusicApi/issues/1485)
- Added musician task (new) API
- Updated `appver`
### 4.5.6 | 2022.02.12
- Fixed the status code returned when the playlist cover upload API is missing parameters
### 4.5.6 | 2022.02.09
- Added duplicate nickname check API [#1469](https://github.com/Binaryify/NeteaseCloudMusicApi/issues/1469)
### 4.5.5 | 2022.02.09
- Search API supports searching voices
### 4.5.4 | 2022.02.09
- Fixed cloud disk upload failing to read the file
### 4.5.3 | 2022.02.04
- Added check-in progress API [#1462](https://github.com/Binaryify/NeteaseCloudMusicApi/pull/1462)
### 4.5.2 | 2022.01.28
- Entry file optimization [#1457](https://github.com/Binaryify/NeteaseCloudMusicApi/pull/1457)
### 4.5.0 | 2022.01.27


const { default: axios } = require('axios')

// Song recognition: forward the fingerprint to NetEase's audio matcher.
module.exports = async (query, request) => {
  const res = await axios({
    method: 'get',
    url: `https://interface.music.163.com/api/music/audio/match?sessionId=0123456789abcdef&algorithmCode=shazam_v2&duration=${
      query.duration
    }&rawdata=${encodeURIComponent(query.audioFP)}&times=1&decrypt=1`,
    data: null,
  })
  return {
    status: 200,
    body: {
      code: 200,
      data: res.data.data,
    },
  }
}
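The handler above assembles the match URL by string interpolation, with fixed `sessionId`/`algorithmCode` parameters and the fingerprint percent-encoded. A minimal sketch of just that step (the `buildMatchUrl` helper is hypothetical, not part of the module):

```javascript
// Hypothetical helper mirroring how the handler builds the audio-match URL.
// Base64 fingerprints contain '+', '/' and '=', which must be percent-encoded.
function buildMatchUrl(duration, audioFP) {
  return (
    'https://interface.music.163.com/api/music/audio/match' +
    '?sessionId=0123456789abcdef&algorithmCode=shazam_v2' +
    `&duration=${Number(duration)}` +
    `&rawdata=${encodeURIComponent(audioFP)}` +
    '&times=1&decrypt=1'
  )
}

console.log(buildMatchUrl(3, 'AQID+/=='))
```

Without the `encodeURIComponent` call, a raw fingerprint's `+` would be decoded server-side as a space and the match would fail.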


@ -1,6 +1,6 @@
{
"name": "NeteaseCloudMusicApi",
"version": "4.16.4",
"description": "网易云音乐 NodeJS 版 API",
"scripts": {
"start": "node app.js",

File diff suppressed because it is too large

Binary file not shown.

File diff suppressed because one or more lines are too long


<!DOCTYPE html>
<head>
    <style>
        * {
            font-family: sans-serif;
        }
        pre {
            font-family: monospace;
        }
        a {
            font-family: sans-serif;
        }
        audio {
            width: 100%;
        }
        canvas {
            width: 100%;
            height: 0;
            transition: all linear 0.1s;
        }
        .canvas-active {
            height: 15vh;
        }
        pre {
            overflow: scroll;
        }
    </style>
</head>
<body>
    <h1>听歌识曲 Demo (Credit: <a href="https://github.com/mos9527/ncm-afp" target="_blank">https://github.com/mos9527/ncm-afp</a>)</h1>
    <hr>
    <p><b>DISCLAIMER: </b></p>
    <p>This site uses the official NetEase audio matcher APIs (reverse engineered from <a
            href="https://fn.music.163.com/g/chrome-extension-home-page-beta/">https://fn.music.163.com/g/chrome-extension-home-page-beta/</a>)
    </p>
    <p>And DOES NOT condone copyright infringement nor intellectual property theft.</p>
    <hr>
    <p><b>NOTE:</b></p>
    <p>Before you start using the site, you may want to visit this link first:</p>
    <a href="https://cors-anywhere.herokuapp.com/corsdemo">https://cors-anywhere.herokuapp.com/corsdemo</a>
    <p>Since Netease APIs do not send CORS headers, this proxy is required to lift that restriction.</p>
    <hr>
    <p>Usage:</p>
    <ul>
        <li>Select your audio file through the "Choose File" picker</li>
        <li>Hit the "Clip" button and wait for the results!</li>
    </ul>
    <audio id="audio" controls autoplay></audio>
    <canvas id="canvas"></canvas>
    <button id="invoke">Clip</button>
    <input type="file" name="picker" accept="*" id="file">
    <hr>
    <label for="use-mic">Mix in Microphone input</label>
    <input type="checkbox" name="use-mic" id="usemic">
    <hr>
    <pre id="logs"></pre>
</body>
<script src="./afp.wasm.js"></script>
<script src="./afp.js"></script>
<script type="module">
    import { InstantiateRuntime, GenerateFP } from './afp.js'
    const duration = 3
    let audioCtx, recorderNode, micSourceNode
    let audioBuffer, bufferHealth
    let audio = document.getElementById('audio')
    let file = document.getElementById('file')
    let clip = document.getElementById('invoke')
    let usemic = document.getElementById('usemic')
    let canvas = document.getElementById('canvas')
    let canvasCtx = canvas.getContext('2d')
    let logs = document.getElementById('logs')
    logs.write = line => logs.innerHTML += line + '\n'
    function RecorderCallback(channelL) {
        // Keep only the first `duration` seconds (8 kHz mono) for fingerprinting.
        let sampleBuffer = new Float32Array(channelL.subarray(0, duration * 8000))
        GenerateFP(sampleBuffer).then(FP => {
            logs.write(`[index] Generated FP ${FP}`)
            logs.write('[index] Now querying, please wait...')
            fetch(
                '/audio/match?' +
                new URLSearchParams({
                    duration: duration, audioFP: FP
                }), {
                    method: 'POST'
                }).then(resp => resp.json()).then(resp => {
                    if (!resp.data.result) {
                        return logs.write('[index] Query failed with no results.')
                    }
                    logs.write(`[index] Query complete. Results=${resp.data.result.length}`)
                    for (var song of resp.data.result) {
                        logs.write(
                            `[result] <a target="_blank" href="https://music.163.com/song?id=${song.song.id}">${song.song.name} - ${song.song.album.name} (${song.startTime / 1000}s)</a>`
                        )
                    }
                })
        })
    }
    function InitAudioCtx() {
        // AFP.wasm can't do it with anything other than 8KHz
        audioCtx = new AudioContext({ 'sampleRate': 8000 })
        if (audioCtx.state == 'suspended')
            return false
        let audioNode = audioCtx.createMediaElementSource(audio)
        audioCtx.audioWorklet.addModule('rec.js').then(() => {
            recorderNode = new AudioWorkletNode(audioCtx, 'timed-recorder')
            audioNode.connect(recorderNode) // recorderNode doesn't output anything
            audioNode.connect(audioCtx.destination)
            recorderNode.port.onmessage = event => {
                switch (event.data.message) {
                    case 'finished':
                        RecorderCallback(event.data.recording)
                        clip.innerHTML = 'Clip'
                        clip.disabled = false
                        canvas.classList.remove('canvas-active')
                        break
                    case 'bufferhealth':
                        clip.innerHTML = `${(duration * (1 - event.data.health)).toFixed(2)}s`
                        bufferHealth = event.data.health
                        audioBuffer = event.data.recording
                        break
                    default:
                        logs.write(event.data.message)
                }
            }
            // Attempt to get the user's microphone and connect it to the AudioContext.
            navigator.mediaDevices.getUserMedia({
                audio: {
                    echoCancellation: false,
                    autoGainControl: false,
                    noiseSuppression: false,
                    latency: 0,
                },
            }).then(micStream => {
                micSourceNode = audioCtx.createMediaStreamSource(micStream);
                micSourceNode.connect(recorderNode)
                usemic.checked = true
                logs.write('[rec.js] Microphone attached.')
            });
        });
        return true
    }
    clip.addEventListener('click', event => {
        recorderNode.port.postMessage({
            message: 'start', duration: duration
        })
        clip.disabled = true
        canvas.classList.add('canvas-active')
    })
    usemic.addEventListener('change', event => {
        if (!usemic.checked)
            micSourceNode.disconnect(recorderNode)
        else
            micSourceNode.connect(recorderNode)
    })
    file.addEventListener('change', event => {
        file.files[0].arrayBuffer().then(
            async buffer => {
                logs.write(`[index] File ${file.files[0].name} loaded.`)
                audio.src = window.URL.createObjectURL(new Blob([buffer]))
                clip.disabled = false
            })
    })
    function UpdateCanvas() {
        let w = canvas.clientWidth, h = canvas.clientHeight
        canvas.width = w, canvas.height = h
        canvasCtx.fillStyle = 'rgba(0,0,0,0)';
        canvasCtx.fillRect(0, 0, w, h);
        if (audioBuffer) {
            canvasCtx.fillStyle = 'black';
            for (var x = 0; x < w * bufferHealth; x++) {
                var y = audioBuffer[Math.ceil((x / w) * audioBuffer.length)]
                var z = Math.abs(y) * h / 2
                canvasCtx.fillRect(x, h / 2 - (y > 0 ? z : 0), 1, z)
            }
        }
        requestAnimationFrame(UpdateCanvas)
    }
    UpdateCanvas()
    // Retry until the AudioContext can start (it needs a user gesture first).
    let requestCtx = setInterval(() => {
        try {
            if (InitAudioCtx()) { // Put this here so we don't have to deal with the 'user did not interact' thing
                clearInterval(requestCtx)
                logs.write('[rec.js] Audio Context started.')
            }
        } catch {
            // Fail silently
        }
    }, 100)
</script>
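The recorder path above hinges on the AudioContext's fixed 8 kHz sample rate: a `duration`-second clip is exactly `duration * 8000` samples, which is what `RecorderCallback` trims with `subarray`. A standalone sketch of that arithmetic (the `clipSamples` helper name is illustrative, not part of the page):

```javascript
// The demo's AudioContext runs at a fixed 8 kHz, so seconds map to samples via * 8000.
const SAMPLE_RATE = 8000

// Illustrative helper: trim a recorded channel to the fingerprint window,
// as RecorderCallback does with channelL.subarray(0, duration * 8000).
// subarray clamps to the buffer length, so short recordings pass through whole.
function clipSamples(channel, seconds) {
    return new Float32Array(channel.subarray(0, seconds * SAMPLE_RATE))
}

const recording = new Float32Array(5 * SAMPLE_RATE) // 5 s of captured audio
console.log(clipSamples(recording, 3).length) // 24000 samples = 3 s at 8 kHz
```

This is also why the page cannot simply reuse the browser's default 44.1/48 kHz context: the fingerprint WASM expects 8 kHz input.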