_2025-07-03_11:47:53_ | 2025-07-03 11:47:53 | Speeding up Large Memory VM Boot with QEMU ThreadContext | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 性能 | Summary: This blog post discusses the importance of fast virtual machine (VM) boot times in virtualization environments for efficient resource management and improved user experience. The post explains that VMs are commonly configured with preallocated memory for better performance, but the downside is that it can lead to slow boot times due to the upfront commitment of resources and time required for initialization. The post introduces QEMU's ThreadContext feature, which can be used to optimize memory preallocation and reduce VM boot time. ThreadContext ensures that initialization threads are placed on the same NUMA node as the associated memory region and allows for parallel initialization, leading to significant time savings. The post provides instructions on how to use ThreadContext for memory preallocation in QEMU. The results show a 55% reduction in memory preallocation time compared to the baseline without ThreadContext.这篇博文讨论了在虚拟化环境中快速启动虚拟机 (VM) 对于高效资源管理和改善用户体验的重要性。该博文解释说,VM 通常配置预分配的内存以获得更好的性能,但缺点是,由于初始化所需的资源和时间的前期承诺,这可能会导致启动时间变慢。该博文介绍了 QEMU 的 ThreadContext 功能,该功能可用于优化内存预分配并缩短 VM 启动时间。ThreadContext 确保初始化线程与关联的内存区域位于同一 NUMA 节点上,并允许并行初始化,从而节省大量时间。该博文提供了有关如何在 QEMU 中使用 ThreadContext 进行内存预分配的说明。结果显示,与没有 ThreadContext 的基线相比,内存预分配时间减少了 55%。 | |
_2025-06-26_16:22:19_ | 2025-06-26 16:22:19 | soc-infra@lists.riscv.org _ Home | 原文链接失效了?试试备份 | TAGs:处理器 risc-v | Summary: The SOC Infrastructure Horizontal committee is responsible for components straddling the hardware/software boundary in various products, from IoT to data centers. These components, which include those necessary for system boot and operation, often overlap with other committees such as security and RAS. The goal is to establish a comprehensive set of specifications for product implementers, reducing duplication and fragmentation within the RISC-V community.SOC 基础设施横向委员会负责从 IoT 到数据中心的各种产品中跨越硬件/软件边界的组件。这些组件(包括系统启动和操作所需的组件)通常与其他委员会(如安全和 RAS)重叠。目标是为产品实施者建立一套全面的规范,减少 RISC-V 社区内的重复和碎片化。 | |
_2025-06-26_14:44:25_ | 2025-06-26 14:44:25 | RISC-V Technical Specifications - Home - RISC-V Tech Hub | 原文链接失效了?试试备份 | TAGs:处理器 risc-v SPEC | Summary: The RISC-V Technical Specifications page provides a comprehensive list of all ratified technical publications for the RISC-V instruction set architecture. This includes ISA specifications, profiles, and non-ISA specifications. The ISA specifications include the Unprivileged ISA and Privileged Architecture manuals. Profiles include the RVA23 and RISC-V Profiles 1.0. Non-ISA specifications cover various topics such as efficient trace for RISC-V, RISC-V ABIs, RISC-V Advanced Interrupt Architecture, and RISC-V Capacity and Bandwidth QoS Register Interface. The RISC-V Architectural Compatibility Test Framework is also available for ensuring model compatibility.RISC-V 技术规格页面提供了 RISC-V 指令集架构的所有已批准技术出版物的完整列表。这包括 ISA 规范、配置文件和非 ISA 规范。ISA 规范包括 Unprivileged ISA 和 Privileged Architecture 手册。配置文件包括 RVA23 和 RISC-V 配置文件 1.0。非 ISA 规范涵盖各种主题,例如 RISC-V 的高效跟踪、RISC-V ABI、RISC-V 高级中断架构以及 RISC-V 容量和带宽 QoS 寄存器接口。RISC-V 架构兼容性测试框架也可用于确保模型兼容性。 | |
_2025-06-26_11:59:38_ | 2025-06-26 11:59:38 | Linux Plumbers Conference | 原文链接失效了?试试备份 | TAGs:操作系统 linux 会议 | Summary: The Linux Plumbers Conference is a prominent event for developers working on the plumbing layer of Linux systems and beyond. It caters to developers at various levels of expertise.Linux 管道工会议是面向 Linux 系统及其他管道层的开发人员的重要活动。它迎合了不同专业知识水平的开发人员。 | |
_2025-06-25_18:56:43_ | 2025-06-25 18:56:43 | 青年软件工程师的不稳定劳动状况及行动逻辑——基于一家人工智能企业的调查 | 原文链接失效了?试试备份 | TAGs:company&job | Summary: Software engineering professionals value career development and self-worth primarily in three ways. First, they prioritize a culture that encourages loyalty, reliability, and responsibility, allowing software engineers to grow with the company (Kunda, 2006: 70-71). Second, they seek fair development opportunities, including salary and career advancement that aligns with their market position and work performance (Jia & You, 2024). Lastly, they embrace an innovative spirit, desiring to work at the technological forefront and contribute to innovation (Castres, 2007: 43). Instability, however, can significantly harm software engineers' self-worth.软件工程专业人士主要通过三种方式重视职业发展和自我价值。首先,他们优先考虑鼓励忠诚度、可靠性和责任感的文化,让软件工程师与公司一起成长(Kunda,2006:70-71)。其次,他们寻求公平的发展机会,包括与他们的市场地位和工作表现相符的薪水和职业发展(Jia & You),2024)。最后,他们拥抱创新精神,渴望在技术前沿工作并为创新做出贡献(Castres,2007:43)。然而,不稳定会严重损害软件工程师的自我价值。 | |
_2025-06-25_18:50:14_ | 2025-06-25 18:50:14 | 作者手记丨人工智能热的断续之思 | 原文链接失效了?试试备份 | TAGs:company&job | Summary: Young programmers entered the industry with valuable self-worth, such as a culture emphasizing loyalty and reliability, a geek spirit seeking innovation, and professional expectations of freedom and fairness. However, in the face of real-world pressures, many had to change their mindsets and imitate employers, jumping ship to companies with higher salaries when their skills were still in demand.年轻的程序员带着宝贵的自我价值进入这个行业,例如强调忠诚和可靠性的文化、寻求创新的极客精神以及对自由和公平的专业期望。然而,面对现实世界的压力,许多人不得不改变心态并模仿雇主,当他们的技能仍然供不应求时,他们跳槽到薪水更高的公司。 | |
_2025-06-24_15:53:03_ | 2025-06-24 15:53:03 | Aya - an eBPF library built from the ground up purely in Rust | 原文链接失效了?试试备份 | TAGs:数据中心 eBPF Rust | Summary: eBPF is a technology that enables running user-supplied programs inside the Linux kernel. Aya is an eBPF library built in Rust, offering a true compile once, run everywhere solution with features like BTF support, function call relocation, and async support. Notable users of Aya include Anza, Deepfence, Exein, and Kubernetes SIGs. Aya is known for its easy deployment and fast build time.eBPF 是一种允许在 Linux 内核中运行用户提供的程序的技术。Aya 是一个用 Rust 构建的 eBPF 库,提供真正的一次编译、随处运行的解决方案,具有 BTF 支持、函数调用重定位和异步支持等功能。Aya 的著名用户包括 Anza、Deepfence、Exein 和 Kubernetes SIG。Aya 以其易于部署和快速构建时间而闻名。 | |
_2025-06-24_15:48:50_ | 2025-06-24 15:48:50 | Eunomia - Unlock the potential of eBPF - eunomia | 原文链接失效了?试试备份 | TAGs:数据中心 eBPF | Summary: Eunomia is an open-source organization focused on enhancing the eBPF ecosystem through tools and frameworks. Their projects include bpftime, a high-performance eBPF runtime, and Wasm-bpf, a user-space development library for eBPF programs based on WebAssembly. They also offer practical tutorials and tools for generating eBPF programs using natural language.Eunomia 是一个开源组织,专注于通过工具和框架增强 eBPF 生态系统。他们的项目包括 bpftime(高性能 eBPF 运行时)和 Wasm-bpf(基于 WebAssembly 的 eBPF 程序的用户空间开发库)。他们还提供使用自然语言生成 eBPF 程序的实用教程和工具。 | |
_2025-06-24_15:24:58_ | 2025-06-24 15:24:58 | eBPF 教程:BPF 调度器入门 - eunomia | 原文链接失效了?试试备份 | TAGs:数据中心 eBPF 调度 | Summary: This text is a tutorial about eBPF scheduler, focusing on the sched_ext scheduler in the Linux kernel version 6.12. The tutorial explains the architecture of sched_ext, how to use BPF programs to define scheduling behavior, and guides the reader in compiling and running an example. The sched_ext scheduler is a flexible and customizable scheduler that allows the implementation of any scheduling algorithm on top of it. Its key features include flexible scheduling algorithms, dynamic CPU grouping, runtime control, system integrity, and debug support. The tutorial covers the core of the sched_ext tutorial, which is the sched_ext scheduler class. Unlike traditional schedulers, sched_ext allows scheduling behavior to be defined dynamically through a set of BPF programs, making it highly adaptable and customizable. This means that any scheduling algorithm can be implemented on sched_ext to meet specific requirements. The tutorial then introduces scx_simple, a minimal example of a sched_ext scheduler. It is designed to be simple and easy to understand, and provides a foundation for more complex scheduling policies. Scx_simple can run in two modes: global vtime mode and FIFO mode. Global vtime mode sorts tasks based on their virtual time priority, ensuring fairness between different workloads. FIFO mode, based on a simple queue, executes tasks in the order they arrive. The tutorial covers the use cases and applicability of scx_simple, and provides code analysis in both the kernel and user space. In the kernel space, the tutorial shows the complete code segments and explains their functions. In the user space, the tutorial covers the implementation of the read_stats function, which collects and reports statistics on the local and global queues. The tutorial concludes by summarizing the importance of sched_ext and eBPF in creating and managing advanced scheduling policies. The tutorial provides references to the sched_ext repository, Linux kernel documentation, eBPF official documentation, and the libbpf documentation.本文是关于 eBPF 调度器的教程,重点介绍 Linux 内核 6.12 版本中的 sched_ext 调度器。本教程介绍了 sched_ext 的架构,如何使用 BPF 程序定义调度行为,并指导读者编译和运行示例。sched_ext 调度程序是一个灵活且可自定义的调度程序,允许在其上实施任何调度算法。其主要功能包括灵活的调度算法、动态 CPU 分组、运行时控制、系统完整性和调试支持。本教程涵盖了 sched_ext 教程的核心,即 sched_ext 计划程序类。与传统调度程序不同,sched_ext 允许通过一组 BPF 程序动态定义调度行为,使其具有高度的适应性和可定制性。这意味着可以在 sched_ext 上实施任何调度算法以满足特定要求。然后,本教程介绍了 scx_simple,这是 sched_ext 计划程序的最小示例。它设计为简单易懂,并为更复杂的计划策略提供了基础。Scx_simple 可以在两种模式下运行:全局 vtime 模式和 FIFO 模式。全局 vtime 模式根据任务的虚拟时间优先级对任务进行排序,从而确保不同工作负载之间的公平性。FIFO 模式基于简单队列,按照任务到达的顺序执行任务。本教程涵盖了 scx_simple 的使用案例和适用性,并提供了内核和用户空间中的代码分析。在内核领域,本教程展示了完整的代码段并解释了它们的功能。在用户空间中,本教程介绍了 read_stats 函数的实现,该功能收集并报告本地和全局队列的统计信息。 本教程最后总结了 sched_ext 和 eBPF 在创建和管理高级调度策略中的重要性。本教程提供了 sched_ext 仓库、Linux 内核文档、eBPF 官方文档和 libbpf 文档的参考。 | |
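To give a concrete flavour of what "defining scheduling behaviour through BPF programs" looks like, below is a heavily trimmed FIFO-style sketch modelled loosely on the in-tree scx_simple example. The header path, the BPF_STRUCT_OPS/SCX_OPS_DEFINE macros and the scx_bpf_dispatch() helper are recalled from the 6.12-era tools/sched_ext tree and may be named differently in other kernel versions, so treat this as an illustrative sketch rather than buildable code.

```c
/* FIFO-flavoured sched_ext sketch, loosely modelled on scx_simple.
 * Assumes the helper headers from tools/sched_ext (6.12-era names). */
#include <scx/common.bpf.h>

char _license[] SEC("license") = "GPL";

/* Every runnable task is appended to the global dispatch queue with the
 * default time slice, which yields plain FIFO ordering. */
void BPF_STRUCT_OPS(minimal_enqueue, struct task_struct *p, u64 enq_flags)
{
    scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
}

SCX_OPS_DEFINE(minimal_ops,
               .enqueue = (void *)minimal_enqueue,
               .name    = "minimal");
```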
_2025-06-24_15:15:35_ | 2025-06-24 15:15:35 | 探索Google的crosvm:一款轻量级虚拟化解决方案-CSDN博客 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 Rust 虚拟机 | Summary: The blog post introduces Crosvm, an open-source project developed by Google for managing virtual machines on Chrome OS. Crosvm is written mainly in Rust and uses KVM to provide a virtualization solution on Chrome OS. It allows users to run multiple independent operating systems on a single hardware platform, enabling various applications such as software development, sandbox environments, and multi-tasking. The post also discusses the features of Crosvm, including its lightweight design, high performance, extensibility, and security.该博客文章介绍了 Crosvm,这是 Google 开发的一个开源项目,用于管理 Chrome OS 上的虚拟机。Crosvm 主要用 Rust 编写,并使用 KVM 在 Chrome OS 上提供虚拟化解决方案。它允许用户在单个硬件平台上运行多个独立的操作系统,从而支持各种应用程序,例如软件开发、沙盒环境和多任务处理。该博文还讨论了 Crosvm 的特性,包括其轻量级设计、高性能、可扩展性和安全性。 | |
_2025-06-24_15:10:54_ | 2025-06-24 15:10:54 | Index of _bpfconf2024_material | 原文链接失效了?试试备份 | TAGs:数据中心 eBPF bpfconf | Summary: This is a list of files in a directory for the BPF conference 2024, including PDFs, images, and text files, with sizes and last modified dates indicated. The files cover various topics related to BPF, such as BPF-IETF status, performance, compiler issues, and evolution. Some files were modified on June 21, 2024, while others were modified later. The total file size is around 6.5MB.这是 2024 年 BPF 会议目录中的文件列表,包括 PDF、图像和文本文件,并标明了大小和上次修改日期。这些文件涵盖了与 BPF 相关的各种主题,例如 BPF-IETF 状态、性能、编译器问题和演变。一些文件于 2024 年 6 月 21 日修改,而其他文件则稍后修改。总文件大小约为 6.5MB。 | |
_2025-06-24_15:10:33_ | 2025-06-24 15:10:33 | Index of _bpfconf2023_material | 原文链接失效了?试试备份 | TAGs:数据中心 eBPF bpfconf | Summary: This text describes a list of files in a directory related to the BPF (Berkeley Packet Filter) conference 2023. The files include PDF presentations, documents, and images, with sizes ranging from a few hundred KB to over 2 MB. The files were last modified between May 7, 2023, and May 24, 2023.本文描述了与 2023 年 BPF(Berkeley Packet Filter)会议相关的目录中的文件列表。这些文件包括 PDF 演示文稿、文档和图像,大小从几百 KB 到超过 2 MB 不等。这些文件的最后修改时间为 2023 年 5 月 7 日至 2023 年 5 月 24 日。 | |
_2025-06-24_15:10:15_ | 2025-06-24 15:10:15 | Index of _bpfconf2022_material | 原文链接失效了?试试备份 | TAGs:数据中心 eBPF bpfconf | Summary: This text describes a directory listing for a webpage, which includes various files such as images and PDFs, along with their sizes and last modified dates. The files are related to the BPF conference 2022 and have names like "bpf\_logo.png," "bpfconf\_group\_1.jpg," and "lsfmmbpf2022-networking.pdf." The server from which this directory is accessed is Apache/2.0.52 running on CentOS.此文本描述了网页的目录列表,其中包括各种文件(如图像和 PDF)以及它们的大小和上次修改日期。这些文件与 BPF Conference 2022 相关,名称类似于“bpf\_logo.png”、“bpfconf\_group\_1.jpg”和“lsfmmbpf2022-networking.pdf”。访问此目录的服务器是运行在 CentOS 上的 Apache/2.0.52。 | |
_2025-06-24_14:48:56_ | 2025-06-24 14:48:56 | 动态vcpu优先级管理_ebpf 调度策略-CSDN博客 | 原文链接失效了?试试备份 | TAGs:数据中心 eBPF 调度 使用eBPF进行半虚拟化调度 | Summary: This blog post discusses the use of eBPF in implementing half virtualization scheduling and dynamic vCPU priority management. The motivation behind this is the issue of double scheduling in virtualization, where both the host and guest have their own schedulers, leading to a lack of awareness between them regarding the tasks being run on each other's vCPUs. This can result in issues such as delays, increased power consumption, and resource underutilization. | |
_2025-06-24_14:33:21_ | 2025-06-24 14:33:21 | eBPF 示例教程:实现 scx_nest 内核调度器 - 知乎 | 原文链接失效了?试试备份 | TAGs:数据中心 eBPF | Summary: This text describes the implementation of the scx_nest scheduler, a modern eBPF program that dynamically adjusts task assignment based on CPU core frequency and utilization rate to optimize system performance. The article explains that the sched_ext scheduler class, introduced in Linux kernel version 6.12, is a significant advancement in kernel scheduling capabilities. Unlike traditional schedulers, sched_ext allows the definition of scheduler behavior through a set of BPF programs, providing flexibility for developers to implement custom scheduling algorithms tailored to specific workloads and system requirements. | |
|
_2025-06-20_11:16:32_ | 2025-06-20 11:16:32 | 【KVM虚拟化技术深度解析】:掌握QEMU-KVM的CPU管理秘籍 - CSDN文库 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 cpu | Summary: This text provides a comprehensive overview and in-depth analysis of KVM virtualization technology. It begins by introducing the basics of KVM virtualization and its collaboration with QEMU. Then, it explores CPU virtualization technology, including hardware assisted virtualization, challenges and solutions, CPU scheduling policies, and performance optimization cases. Through practical chapters, the text offers techniques for virtual machine CPU configuration, hot plugging, and dynamic migration. It also discusses advanced CPU management techniques, such as multi-virtual CPU configuration optimization, CPU affinity and isolation, performance tuning, and fault diagnosis. Lastly, it looks to the future of KVM virtualization CPU management, including new hardware support, community development, and contributions. This text is valuable for virtualization technology developers and administrators, providing essential information and practical guidelines.本文对 KVM 虚拟化技术进行了全面概述和深入分析。它首先介绍了 KVM 虚拟化的基础知识及其与 QEMU 的协作。然后,它探讨了 CPU 虚拟化技术,包括硬件辅助虚拟化、挑战和解决方案、CPU 调度策略和性能优化案例。通过实践章节,该教材提供了虚拟机 CPU 配置、热插拔和动态迁移的技术。它还讨论了高级 CPU 管理技术,例如多虚拟 CPU 配置优化、CPU 关联和隔离、性能调整和故障诊断。最后,它展望了 KVM 虚拟化 CPU 管理的未来,包括新硬件支持、社区开发和贡献。本文对虚拟化技术开发人员和管理员很有价值,它提供了基本信息和实用指南。 | |
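One of the techniques the overview lists, CPU affinity for vCPU threads, reduces to the standard Linux affinity API. The sketch below pins the calling thread to one host core; it is a generic illustration built on pthread_setaffinity_np, not code from the tutorial (build with -pthread; the CPU number is chosen arbitrarily).

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Pin the calling thread to a single host CPU. A VMM typically applies
 * the same idea to each vCPU thread to reduce scheduling jitter. */
static int pin_self_to_cpu(int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    int err = pin_self_to_cpu(2);   /* host CPU 2 chosen arbitrarily */

    if (err != 0) {
        fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));
        return 1;
    }
    printf("pinned to CPU 2\n");
    return 0;
}
```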
_2025-06-19_14:36:38_ | 2025-06-19 14:36:38 | 香山开源处理器用户手册 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 香山 | Summary: This document is the user manual for the XiangShan open source processor, specifically for the Kunming Lake V2R2. The latest version of the document can be obtained from the provided links: web version - , PDF file - . The document is licensed under CC BY 4.0 and is subject to the terms of the license. The document provides preliminary information and may be updated irregularly. No warranties are given for the statements, information, or suggestions in the document. | |
_2025-06-18_16:40:36_ | 2025-06-18 16:40:36 | Github Proxy 文件代理加速 | 原文链接失效了?试试备份 | TAGs:代码管理 git | Summary: This text describes a GitHub proxy website that accelerates GitHub file access, improving download experience. It's a charitable service, please do not abuse it. The available acceleration sources are from generous contributors. The page lists several available nodes, their delays, and statuses. Users can operate on specific nodes. The delay is measured when accessing the site. The text also includes user comments about the site's speed and functionality.本文介绍了一个 GitHub 代理网站,该网站可加速 GitHub 文件访问,从而改善下载体验。这是一项慈善服务,请不要滥用它。可用的加速源来自慷慨的贡献者。该页面列出了几个可用节点、它们的延迟和状态。用户可以对特定节点进行操作。延迟是在访问站点时测量的。文本还包括用户对网站速度和功能的评论。 | |
|
_2025-06-16_18:53:03_ | 2025-06-16 18:53:03 | LLM 推理优化竟然和操作系统这么像?一文看懂 Page Attention 与 vLLM 的底层设计哲学 - 知乎 | 原文链接失效了?试试备份 | TAGs:大模型 | Summary: This article discusses the similarities between the design philosophies of Page Attention in vLLM and operating system page management. The author explains that in the process of LLM inference, there are resource scheduling issues similar to those in operating systems. The Page Attention mechanism in vLLM aims to solve the problems of memory fragmentation and context cache reuse efficiency in LLM inference by drawing inspiration from operating system paging management. The author also highlights the benefits of PagedAttention, such as eliminating external fragmentation, reducing internal fragmentation, and supporting sharing and copying of KV Cache between requests. The article also mentions the concept of lazy allocation and on-demand allocation in operating systems and how vLLM adopts a similar strategy. The author concludes that by viewing models as "a service system," one can find inspiration from operating system experiences. The article is written by YixinMLSys and has been read and liked by 602 people. | |
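As a rough illustration of the paging analogy (not vLLM's actual code), the sketch below maps a request's logical token positions onto fixed-size physical KV-cache blocks through a per-request block table, much as a page table maps virtual pages to physical frames; the block size and table layout are assumed for illustration.

```c
#include <stdio.h>

/* Hypothetical KV-cache paging sketch: each request owns a small "block
 * table" that maps logical block numbers to physical block IDs, so the
 * KV cache for one sequence need not be contiguous (mirrors OS paging). */
#define BLOCK_SIZE 16               /* tokens per KV-cache block (assumed) */

struct block_table {
    int physical_block[64];          /* logical block index -> physical block id */
    int num_blocks;                  /* blocks currently allocated to the request */
};

/* Translate a logical token position into (physical block, offset). */
static void locate_token(const struct block_table *bt, int token_pos,
                         int *phys_block, int *offset)
{
    int logical_block = token_pos / BLOCK_SIZE;
    *offset = token_pos % BLOCK_SIZE;
    *phys_block = bt->physical_block[logical_block];
}

int main(void)
{
    /* A request whose 40 cached tokens live in three scattered physical blocks. */
    struct block_table bt = { .physical_block = { 7, 3, 12 }, .num_blocks = 3 };
    int pb, off;

    locate_token(&bt, 37, &pb, &off);   /* token 37 -> logical block 2, offset 5 */
    printf("token 37 -> physical block %d, offset %d\n", pb, off);
    return 0;
}
```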
|
_2025-06-13_15:48:51_ | 2025-06-13 15:48:51 | 解析RISCV fence指令 - RISC-V - 进迭RISC-V论坛 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA FENCE | Summary: This text discusses the use of fence instructions in the RISC-V instruction set to ensure ordered memory access for specific software scenarios. Fence instructions ensure that operations before the fence occur before those after it, preventing unpredictable results. The text provides an example of how fence instructions are used to ensure the order of store and load operations for two cores. It also explains the different formats and uses of fence instructions, including fence.i for ensuring ordered memory access for instruction fetch.本文讨论了在 RISC-V 指令集中使用 fence 指令来确保特定软件场景的有序内存访问。围栏指令可确保围栏之前的操作先于围栏之后的操作发生,从而防止出现不可预知的结果。该文本提供了一个示例,说明如何使用 fence 指令来确保两个内核的 store 和 load 操作的顺序。它还解释了 fence 指令的不同格式和用法,包括 fence.i 以确保指令获取的有序内存访问。 | |
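As a minimal sketch of the two-core store/load ordering scenario described above (a generic message-passing pattern, not the forum post's own code; assumes a GCC/Clang toolchain targeting RISC-V):

```c
/* Producer/consumer message passing across two harts, ordered with fence.
 * volatile keeps the accesses live; the fences provide the ordering. */
static volatile int data;
static volatile int flag;

void producer(void)
{
    data = 42;
    /* Order the data store before the flag store as seen by other harts. */
    __asm__ volatile("fence w, w" ::: "memory");
    flag = 1;
}

int consumer(void)
{
    while (flag == 0)
        ;
    /* Order the flag load before the data load. */
    __asm__ volatile("fence r, r" ::: "memory");
    return data;   /* guaranteed to observe 42 once flag == 1 */
}
```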
|
_2025-06-08_17:35:54_ | 2025-06-08 17:35:54 | 北京市发展和改革委员会 | 原文链接失效了?试试备份 | TAGs:生活 城建 | Summary: This text is a list of various announcements, policy explanations, and policy documents related to the Beijing municipal government. Topics include education, energy, housing, transportation, and economic development. Some announcements involve the publication or release of reports, while others concern the approval of projects or the solicitation of opinions on proposed policies. Some announcements also provide contact information for consultation and inquiry.本文是与北京市政府相关的各种公告、政策说明和政策文件的列表。主题包括教育、能源、住房、交通和经济发展。一些公告涉及报告的发表或发布,而另一些公告则涉及项目的批准或对拟议政策的意见征求。一些公告还提供联系信息,以供咨询和查询。 | |
_2025-06-08_17:17:15_ | 2025-06-08 17:17:15 | RISC-V SoCReady SystemVIP - Breker Verification Systems | 原文链接失效了?试试备份 | TAGs:处理器 验证 | Summary: The RISC-V SoCReady SystemVIP is a comprehensive verification solution for RISC-V SoCs from Breker. It includes a test suite for functional and performance operation evaluation, synthesis technologies for increased coverage and corner case detection, and is portable across various execution platforms. The test suite covers various aspects such as memory tests, system coherency, paging/IOMMU, system security, power management, and packet generation. The SystemVIP is built on Breker's Test Suite Synthesis platform for effective bug hunting and scenario modeling.RISC-V SoCReady SystemVIP 是 Breker 为 RISC-V SoC 提供的全面验证解决方案。它包括一个用于功能和性能操作评估的测试套件,用于增加覆盖范围和极端情况检测的综合技术,并且可以在各种执行平台上移植。该测试套件涵盖内存测试、系统一致性、分页/IOMMU、系统安全、电源管理和数据包生成等各个方面。SystemVIP 基于 Breker 的 Test Suite Synthesis 平台构建,用于有效的错误搜寻和场景建模。 | |
|
_2025-06-06_20:05:18_ | 2025-06-06 20:05:18 | 说人话之什么是Transformer? | 原文链接失效了?试试备份 | TAGs:大模型 | Summary: The Transformer is a technology used in AI models, such as ChatGPT, which allows machines to understand and process text as effectively as humans do. It uses attention mechanisms to help machines focus on important information and relationships within text, improving reading efficiency and resolving the long-distance dependency problem. The attention mechanism works by allowing each word in a sentence to "look back" at other words in the sentence and determine their relationship. This helps the model understand the dependencies between words, even if they are far apart in the sentence. The Transformer also uses self-attention and multi-head attention mechanisms to enhance its capabilities.Transformer 是一种用于 AI 模型(例如 ChatGPT)的技术,它使机器能够像人类一样有效地理解和处理文本。它使用注意力机制来帮助机器专注于文本中的重要信息和关系,提高阅读效率并解决远距离依赖问题。注意力机制的工作原理是允许句子中的每个单词 “回顾” 句子中的其他单词并确定它们之间的关系。这有助于模型理解单词之间的依赖关系,即使它们在句子中相距甚远。Transformer 还使用自注意力和多头注意力机制来增强其能力。 | |
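For reference, the scaled dot-product attention that the article explains in plain language is conventionally written as follows (standard Transformer formulation; $Q$, $K$, $V$ are the query, key and value matrices and $d_k$ is the key dimension):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$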
_2025-06-06_16:12:28_ | 2025-06-06 16:12:28 | 说人话之什么是Token? | 原文链接失效了?试试备份 | TAGs:大模型 | Summary: This article explains the concept of a Token in the context of large models, specifically in natural language processing. A Token is a unit used by models to understand and process human language. It can be a single character, a word, or even a punctuation mark. The length and definition of a Token depend on the tokenizer used by the model. In English, spaces between words make tokenization easier, but in Chinese, where words don't have inherent spaces, tokenization is more complex. Modern large models use subword tokenization, which breaks down words into smaller units like prefixes, suffixes, or common combinations, to balance language expressiveness and model efficiency. This allows the model to understand the meaning of words through the relationships between these subwords. For example, the word "unhappiness" can be broken down into "un," "happy," and "ness," even if the model hasn't seen the word before, it can infer the meaning based on the subwords. In summary, Tokens are essential for large models to understand and process human language by converting complex language information into standardized units that models can handle.本文介绍了大型模型上下文中的 Token 概念,特别是在自然语言处理中。Token 是模型用来理解和处理人类语言的单元。它可以是单个字符、单词,甚至是标点符号。Token 的长度和定义取决于模型使用的 tokenizer。在英语中,单词之间的空格使分词更容易,但在中文中,单词没有固有的空格,分词化更复杂。现代大型模型使用子词标记化,将单词分解为较小的单元,如前缀、后缀或常见组合,以平衡语言表达性和模型效率。这允许模型通过这些子词之间的关系来理解单词的含义。例如,“unhappiness”这个词可以分解成“un”、“happy”和“ness”,即使模型以前没有见过这个词,它也可以根据子词推断出含义。总之,Tokens 对于大型模型理解和处理人类语言至关重要,它将复杂的语言信息转换为模型可以处理的标准化单元。 | |
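To make the subword idea concrete, here is a toy greedy longest-match tokenizer over a made-up vocabulary (purely illustrative; real models use trained BPE/WordPiece vocabularies and more careful matching rules):

```c
#include <stdio.h>
#include <string.h>

/* Toy subword vocabulary; real tokenizers learn this from data. */
static const char *vocab[] = { "un", "happi", "happy", "ness", "token", "ize", "s" };
#define VOCAB_LEN (sizeof(vocab) / sizeof(vocab[0]))

/* Greedy longest-prefix match: repeatedly take the longest vocabulary
 * entry that prefixes the remaining text, falling back to one character. */
static void tokenize(const char *word)
{
    while (*word) {
        size_t best = 0;
        const char *best_tok = NULL;
        for (size_t i = 0; i < VOCAB_LEN; i++) {
            size_t len = strlen(vocab[i]);
            if (len > best && strncmp(word, vocab[i], len) == 0) {
                best = len;
                best_tok = vocab[i];
            }
        }
        if (best_tok) {
            printf("[%s] ", best_tok);
            word += best;
        } else {
            printf("[%c] ", *word);   /* unknown character: emit it alone */
            word++;
        }
    }
    printf("\n");
}

int main(void)
{
    tokenize("unhappiness");   /* -> [un] [happi] [ness] */
    tokenize("tokenizes");     /* -> [token] [ize] [s]   */
    return 0;
}
```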
|
_2025-06-05_14:43:03_ | 2025-06-05 14:43:03 | High RISC, High Reward_ RISC-V at 15 – RISC-V International | 原文链接失效了?试试备份 | TAGs:处理器 risc-v | Summary: RISC-V is an open-source instruction set architecture (ISA) that was developed at the University of California, Berkeley, starting in 2010. The team, led by Krste Asanović and Andrew Waterman, aimed to create a clean slate for compute architecture, free from the limitations of existing ISAs. They wanted to build a flexible, extensible, and easily customizable ISA that could meet the demands of specialized, customizable, and parallel computing. | |
_2025-06-05_13:21:48_ | 2025-06-05 13:21:48 | 北京市产业地图 | 原文链接失效了?试试备份 | TAGs:生活 城建 | Summary: This text is about the Beijing Industrial Map, which includes policy documents and information about various industries and companies in Beijing. The industries mentioned include new-generation information technology, new materials, and intelligent manufacturing. The salaries for each level of these industries are listed, ranging from 22,000 to 71,000 yuan per year. The text also provides contact information for the Beijing Development and Reform Commission and the Beijing Economic and Information Technology Commission.本文是关于北京工业地图的,其中包括有关北京各行业和公司的政策文件和信息。提到的行业包括新一代信息技术、新材料和智能制造。列出了这些行业每个级别的工资,每年从 22,000 到 71,000 元不等。文本还提供了北京市发展和改革委员会和北京市经济和信息化委员会的联系信息。 | |
_2025-06-03_16:16:20_ | 2025-06-03 16:16:20 | calssion's blog | 原文链接失效了?试试备份 | TAGs:博客_论坛 编译器 | Summary: This text is a collection of blog posts about compiler optimization, specifically focusing on the topic of SSA (Static Single Assignment) format. The first post, published on May 6, 2025, provides an introduction to SSA and briefly discusses its construction and destruction. The second post, published on April 30, 2025, discusses simplifying compiler optimization with the SSA format while reading from "The Static Single Assignment Book." The third post, published on February 7, 2025, introduces flattening ASTs (Abstract Syntax Trees) to optimize compiler performance. The fourth post, published on June 2, 2024, discusses Hydra, a tool that generalizes missed opportunities for loop optimization in LLVM, improving code optimization by up to 75%. The fifth post, published on April 16, 2024, discusses accelerating incremental compilation with a study comparing it to the basic LLVM/Clang incremental compilation, resulting in an average speedup of 6.72%.本文是有关编译器优化的博客文章的集合,特别关注 SSA (Static Single Assignment) 格式的主题。第一篇文章发布于 2025 年 5 月 6 日,介绍了 SSA 并简要讨论了其构建和销毁。第二篇博文发布于 2025 年 4 月 30 日,在阅读“The Static Single Assignment Book”时,讨论了使用 SSA 格式简化编译器优化。第三篇博文发布于 2025 年 2 月 7 日,介绍了扁平化 AST(抽象语法树)以优化编译器性能。第四篇博文发布于 2024 年 6 月 2 日,讨论了 Hydra,该工具可推广 LLVM 中错过的循环优化机会,将代码优化提高多达 75%。第五篇博文发布于 2024 年 4 月 16 日,讨论了加速增量编译,并进行了一项研究,将其与基本的 LLVM/Clang 增量编译进行了比较,平均加速为 6.72%。 | |
_2025-06-03_16:07:18_ | 2025-06-03 16:07:18 | llvm-project_bolt at main · llvm_llvm-project | 原文链接失效了?试试备份 | TAGs:高性能 | Summary: This text describes GitHub's BOLT project, a post-link optimizer designed to speed up applications by optimizing code layout based on execution profile gathered by sampling profilers. The project is compatible with X86-64 and AArch64 ELF binaries, and requires unstripped symbol tables and relocations for maximum performance gains. BOLT disassembles functions and reconstructs the control flow graph before optimizations, relying on heuristics to accomplish this task. The project is heavily based on LLVM libraries and can be built manually or using a docker image. Users can improve BOLT's performance by linking against memory allocation libraries with good concurrency support. The text also provides instructions on how to use BOLT with different types of executables and services. BOLT is licensed under the Apache License v2.0 with LLVM Exceptions.本文介绍了 GitHub 的 BOLT 项目,这是一个链接后优化器,旨在通过根据采样分析器收集的执行配置文件优化代码布局来加速应用程序。该项目与 X86-64 和 AArch64 ELF 二进制文件兼容,并且需要未剥离的符号表和重定位,以实现最大的性能增益。BOLT 在优化之前反汇编函数并重建控制流图,依靠启发式来完成此任务。该项目在很大程度上基于 LLVM 库,可以手动构建或使用 docker 镜像构建。用户可以通过链接具有良好并发支持的内存分配库来提高 BOLT 的性能。该文本还提供了有关如何将 BOLT 与不同类型的可执行文件和服务一起使用的说明。BOLT 根据 Apache 许可证 v2.0 获得许可,但存在 LLVM 例外。 | |
_2025-06-03_15:08:38_ | 2025-06-03 15:08:38 | BOLT_ 链接后优化技术简介 - 知乎 | 原文链接失效了?试试备份 | TAGs:高性能 | Summary: BOLT is a Facebook-developed optimization tool that enhances performance by optimizing code layout in binary files. It is a post-link optimizer that uses sampling-based profile information to enhance the performance of binary files that have already undergone feedback-driven optimization (FDO) and link-time optimization (LTO). BOLT is particularly beneficial for data center applications, where improving code locality is crucial for performance optimization. Unlike FDO, which can be complex and resource-intensive due to its instrumentation-based approach, BOLT uses sampling-based PGO and reduces the complexity and resource requirements. It also operates on the binary level, allowing for simpler optimization compared to PGO's machine code generation before the optimization phase and LTO. By optimizing code layout, BOLT can improve the performance of GCC and Clang compilers by up to 20.4%. It can also be used to optimize third-party libraries without their source code. BOLT is now part of the LLVM repository and provides official documentation for its use. BOLT is essential for data center applications as it improves code locality, which is crucial for performance optimization. FDO, also known as profile-guided optimization, can be complex and resource-intensive due to its instrumentation-based approach. BOLT uses sampling-based PGO, which reduces complexity and resource requirements while also improving accuracy and matching. It operates on the binary level, allowing for simpler optimization and higher accuracy compared to PGO's machine code generation before the optimization phase and LTO. BOLT can skip functions it cannot handle, such as complex, unsupported functions or those without a clear control-flow graph. It may also increase code size due to the addition of jump execution instructions for cold paths, which can be effective in improving memory usage. BOLT uses LLVM to handle reverse engineering, construct control-flow graphs, modify binary files, and analyze function and internal symbol references. It also uses profile information for data flow analysis. Besides optimizing code layout, BOLT has other passes, such as the strip-rep-ret pass, ICF, ICP, simplify-ro-loads, reorder-bbs, and reorder-functions. These passes can optimize code layout, improve I-TLB and I-cache performance, and simplify code. However, BOLT has some limitations, such as its inability to effectively utilize distributed compile systems and the need for significant resources to optimize large code segments. 
To address these limitations, there are parallel optimization tools like Lightning BOLT and Propeller, which can perform optimization in parallel and optimize entire programs based on profile information.BOLT 是 Facebook 开发的优化工具,它通过优化二进制文件中的代码布局来提高性能。它是一个链接后优化器,它使用基于采样的配置文件信息来增强已经经过反馈驱动优化 (FDO) 和链接时优化 (LTO) 的二进制文件的性能。BOLT 对数据中心应用程序特别有益,在这些应用程序中,改进代码局部性对于性能优化至关重要。与 FDO 不同,FDO 由于其基于仪器的方法而可能非常复杂且需要大量资源,而 BOLT 使用基于采样的 PGO 并降低了复杂性和资源要求。它还在二进制级别运行,与优化阶段和 LTO 之前的 PGO 机器代码生成相比,优化更简单。通过优化代码布局,BOLT 可以将 GCC 和 Clang 编译器的性能提高多达 20.4%。它还可用于优化第三方库,而无需其源代码。BOLT 现在是 LLVM 存储库的一部分,并提供其使用的官方文档。BOLT 对于数据中心应用程序至关重要,因为它可以提高代码局部性,这对于性能优化至关重要。FDO 也称为按配置优化,由于其基于插桩的方法,它可能很复杂且需要大量资源。BOLT 使用基于采样的 PGO,这降低了复杂性和资源需求,同时还提高了准确性和匹配性。它在二进制级别运行,与 PGO 在优化阶段和 LTO 之前生成机器代码相比,优化更简单,精度更高。BOLT 可以跳过它无法处理的函数,例如复杂、不受支持的函数或没有清晰控制流图的函数。 由于为冷路径添加了跳转执行指令,它还可能增加代码大小,这可以有效提高内存使用率。BOLT 使用 LLVM 处理逆向工程、构建控制流图、修改二进制文件以及分析函数和内部符号引用。它还使用用户档案信息进行数据流分析。除了优化代码布局外,BOLT 还有其他传递,例如 strip-rep-ret 传递、ICF、ICP、simplify-ro-loads、reorder-bbs 和 reorder-functions。这些通道可以优化代码布局,提高 I-TLB 和 I-cache 性能,并简化代码。但是,BOLT 有一些局限性,例如它无法有效利用分布式编译系统,并且需要大量资源来优化大型代码段。为了解决这些限制,Lightning BOLT 和 Propeller 等并行优化工具可以并行执行优化并根据配置文件信息优化整个程序。 | |
|
_2025-05-30_16:09:15_ | 2025-05-30 16:09:15 | openEuler sysboost 助力数据库性能优化技术内幕 | 原文链接失效了?试试备份 | TAGs:高性能 | Summary: openEuler, a Chinese community, introduced sysboost performance optimization technology in version 22.03 LTS to enhance database performance with easy-to-use and generalizable solutions. This article explains the basic implementation principles of sysboost, which optimizes startup processes while reducing memory usage and provides automatic feedback to improve performance in specific scenarios. The technology has shown a 16.68% improvement in MySQL's TPCC scenario and has also boosted performance in nginx, memcached, and other non-database applications. | |
_2025-05-30_15:05:50_ | 2025-05-30 15:05:50 | Notes about iommu=pt kernel parameter - L | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 IO IOMMU | Summary: This article discusses the iommu=pt kernel parameter, specifically the pt option, when using KVM pass-thru devices. The pt option enables IOMMU translation only for devices used in pass-thru, improving performance for host PCIe devices. The code analysis reveals that when iommu_pass_through is set, identity mapping is configured, and if the hardware supports pass-through translation type, no iova-hpa mapping is required. However, when the hardware doesn't support pass-through translation type, iova-hpa mapping is necessary, leading to reduced performance.本文讨论了使用 KVM 直通设备时的 iommu=pt 内核参数,特别是 pt 选项。pt 选项仅对用于直通的设备启用 IOMMU 转换,从而提高主机 PCIe 设备的性能。代码分析显示,设置 iommu_pass_through 时,会配置身份映射,如果硬件支持直通转换类型,则不需要 iova-hpa 映射。但是,当硬件不支持直通转换类型时,需要 iova-hpa 映射,从而导致性能降低。 | |
_2025-05-28_16:03:32_ | 2025-05-28 16:03:32 | 大厂自研白盒交换机:是技术控的倔强,还是钱包君的呼救? - 知乎 | 原文链接失效了?试试备份 | TAGs:网络 白盒 | Summary: This text is about the benefits of white box switches, specifically in the context of data centers, compared to traditional branded switches. White box switches offer more openness and flexibility, solving various pain points in the operational and management aspects. Companies like Alibaba, ByteDance, and Tencent have explored and implemented white box switches based on open source software like SONiC. The advantages of white box switches have led to increased collaboration between manufacturers and companies, ranging from OEM to JDM collaboration models. The article also mentions specific examples of white box switches developed by Alibaba (Tigatron) and Tencent (TCSOS).本文介绍了与传统品牌交换机相比,白盒交换机的优势,特别是在数据中心环境中。白盒交换机提供更多的开放性和灵活性,解决了运维和管理方面的各种痛点。阿里巴巴、字节跳动和腾讯等公司已经探索并实施了基于 SONiC 等开源软件的白盒切换。白盒交换机的优势导致制造商和公司之间的协作增加,从 OEM 到 JDM 协作模式。文章还提到了阿里巴巴 (Tigatron) 和腾讯 (TCSOS) 开发的白盒交换机的具体示例。 | |
_2025-05-28_15:22:43_ | 2025-05-28 15:22:43 | SDNLAB _ 专注网络创新技术 | 原文链接失效了?试试备份 | TAGs:网络 | Summary: This text is from the homepage of SDNLAB, a platform focused on network innovation technology. The page suggests using Google or Firefox browsers for account security and better product experience. The main features of the platform include enterprise+ activities, a future networking academy, experiment platform, optimization, topics, resources, books, product library, and a discussion forum. The content includes articles and news on SDN, SD-WAN, DPU, NFV, Cloud Edge Computing, 5G, IoT, AI, and Network Security. The articles cover topics such as GPU Direct RDMA technology, the cost and performance of AI superclusters, the evolution of AI network interconnects, and the latest developments in these technologies.本文来自 SDNLAB 的主页,SDNLAB 是一个专注于网络创新技术的平台。该页面建议使用 Google 或 Firefox 浏览器来确保帐户安全和更好的产品体验。该平台的主要功能包括 enterprise+ 活动、未来网络学院、实验平台、优化、主题、资源、书籍、产品库和论坛。内容包括有关 SDN、SD-WAN、DPU、NFV、云边缘计算、5G、IoT、AI 和网络安全的文章和新闻。这些文章涵盖的主题包括 GPU Direct RDMA 技术、AI 超级集群的成本和性能、AI 网络互连的演变以及这些技术的最新发展。 | |
_2025-05-28_15:16:14_ | 2025-05-28 15:16:14 | 蓝芯算力 _ 项目信息-36氪 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 企业 | Summary: Established in May 2023, this company focuses on designing high-performance chipsets and specializes in the research and design of RISC-V architecture server CPUs. They offer self-controlled solutions for data centers, cloud computing, enterprise applications (including finance, securities, insurance, telecommunications, etc.), AI and big data, big models, and other applications in China. The core team and R&D staff have extensive experience working in international tech giants (Intel, Qualcomm, etc.) and a track record of delivering multiple CPU products. They have signed long-term strategic partnerships with renowned RISC-V research institutions and domestic server product manufacturers and are currently pushing forward with CPU product development according to plan.该公司成立于 2023 年 5 月,专注于设计高性能芯片组,专门从事 RISC-V 架构服务器 CPU 的研究和设计。他们为中国的数据中心、云计算、企业应用(包括金融、证券、保险、电信等)、人工智能和大数据、大模型和其他应用提供自主解决方案。核心团队和研发人员在国际科技巨头(英特尔、高通等)有丰富的工作经验,并有交付多种 CPU 产品的记录。他们与著名的 RISC-V 研究机构和国内服务器产品制造商签署了长期战略合作伙伴关系,目前正在按计划推进 CPU 产品开发。 | |
_2025-05-28_14:53:14_ | 2025-05-28 14:53:14 | 46 页 PPT 深入了解白盒交换机!- SDNLAB | 原文链接失效了?试试备份 | TAGs:网络 白盒 | Summary: Based on the context, a white box switch is a network device that separates software and hardware with an open architecture using commercial hardware and an open operating system. It offers flexible configuration management and breaks the traditional "software-hardware binding" model, granting users the ability to customize network functions on demand.根据上下文,白盒交换机是一种网络设备,它使用商业硬件和开放式操作系统,通过开放式架构将软件和硬件分开。它提供灵活的配置管理,打破了传统的“软硬件绑定”模式,使用户能够按需定制网络功能。 | |
_2025-05-28_14:22:49_ | 2025-05-28 14:22:49 | 宇树发布Unitree Pump健身泵,再次新创全球全新行业! - 宇树科技 | 原文链接失效了?试试备份 | TAGs:生活 运动 | Summary: A company named Unitree has released a new fitness pump, marking the creation of a new industry on a global scale. The Unitree Pump is the latest product from the company.宇树(Unitree)发布了一款新的健身泵,标志着在全球范围内开创了一个新行业。Unitree Pump 是该公司的最新产品。 | |
_2025-05-28_12:05:24_ | 2025-05-28 12:05:24 | 打破国外厂商垄断,这几家车规电机驱动芯片厂商开始实现替代-电子工程专辑 | 原文链接失效了?试试备份 | TAGs:处理器 车 | Summary: This text is about the development and competition in the market for automotive-grade motor drive chips, which are essential components for electric vehicles. The text discusses how China's domestic companies are starting to challenge the dominance of foreign companies in this field, with companies like PeakPlus Technology, Mustek, and BYD Semiconductor emerging as significant players. The text also mentions the importance of these chips in electric vehicles, as they determine key performance indicators such as hill-climbing ability, acceleration, and maximum speed. The text concludes by mentioning the future direction of automotive-grade motor drive chips, including the use of self-developed cores, hardware implementation of algorithms, and high integration.本文是关于车规电机驱动芯片市场的发展和竞争,这些芯片是电动汽车的重要组成部分。本文讨论了中国国内公司如何开始挑战外国公司在该领域的主导地位,PeakPlus Technology、Mustek 和 BYD Semiconductor 等公司成为重要参与者。文中还提到了这些芯片在电动汽车中的重要性,因为它们决定了爬坡能力、加速度和最大速度等关键性能指标。本文最后提到了车规电机驱动芯片的未来方向,包括使用自研内核、算法硬件化、高集成度。 | |
_2025-05-27_14:56:03_ | 2025-05-27 14:56:03 | tech-fast-int@lists.riscv.org _ Home | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 中断 | Summary: The Fast Interrupt Task Group aims to create a low-latency, vectored, priority-based, preemptive interrupt scheme for a single RISC-V Hart. This design adheres to RISC-V standards and includes both hardware specifications and software Application Binary Interfaces (ABIs)/Application Programming Interfaces (APIs). Compiler conventions for annotating interrupt handler functions will also be standardized.Fast Interrupt Task Group 旨在为单个 RISC-V Hart 创建一个低延迟、矢量化、基于优先级的抢占式中断方案。此设计符合 RISC-V 标准,包括硬件规范和软件应用程序二进制接口 (ABI)/应用程序编程接口 (API)。用于注释中断处理程序函数的编译器约定也将标准化。 | |
_2025-05-27_11:13:30_ | 2025-05-27 11:13:30 | RISC-V Summit Europe 2025 - Welcome | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 会议 | Summary: The RISC-V Summit Europe is a premier event connecting European industry, government, research, academia, and ecosystem support to build the future of innovation on RISC-V. RISC-V is an open standard instruction set architecture (ISA) that has gained significant success in Europe, with one-third of its global community based in the region. The summit takes place in Paris from May 12-15, 2025, and aims to help attendees explore both commercial and research applications. The event features keynotes, invited talks, and sessions on topics such as high-performance RISC-V systems, open chiplet architecture, and RISC-V in space computing.RISC-V 欧洲峰会是连接欧洲工业、政府、研究、学术界和生态系统支持的首要活动,旨在构建 RISC-V 创新的未来。RISC-V 是一种开放标准指令集架构 (ISA),在欧洲取得了重大成功,其全球社区的三分之一位于该地区。该峰会将于 2025 年 5 月 12 日至 15 日在巴黎举行,旨在帮助与会者探索商业和研究应用。该活动包括主题演讲、特邀报告和会议,主题包括高性能 RISC-V 系统、开放式小芯片架构和空间计算中的 RISC-V。 | |
_2025-05-23_16:55:05_ | 2025-05-23 16:55:05 | PCIe的流量控制 - 知乎 | 原文链接失效了?试试备份 | TAGs:设备与驱动 PCI_PCIe 流控 | Summary: PCIe's Flow Control is a mechanism in the Data Link Layer that ensures the receiving Transaction Layer has the capacity to receive Transaction Layer Packets (TLPs). The Flow Control process involves regular exchange of credit information between the Data Link Layers of the communicating devices. Each device's Buffer capacity is represented in credits, with one credit equating to 4DW of TLP Payload or one TLP Header. The devices exchange Flow Control DLLPs, which carry the latest credit information, to manage the flow of TLPs towards the Data Link Layer. The interval between these credit updates is crucial, as a long interval may leave the buffer empty but prevent the sending of new TLPs, while a short interval may consume significant bandwidth and reduce overall utilization.PCIe 的流控制是数据链路层中的一种机制,可确保接收事务层具有接收事务层数据包 (TLP) 的能力。流控制过程涉及通信设备的数据链路层之间的定期信用信息交换。每个设备的缓冲区容量以积分表示,一个积分等于 4DW 的 TLP 有效载荷或一个 TLP 标头。这些设备交换流控制 DLLP,这些 DLLP 携带最新的信用信息,以管理 TLP 流向数据链路层。这些信用更新之间的间隔至关重要,因为较长的间隔可能会使缓冲区为空,但会阻止发送新的 TLP,而较短的间隔可能会消耗大量带宽并降低整体利用率。 | |
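As a back-of-the-envelope illustration of the credit accounting summarized above (one data credit = 4 DW = 16 bytes of payload, plus one header credit per TLP), the sketch below checks whether a transmitter's current credit balance allows sending a memory-write TLP; all numbers are made up for illustration.

```c
#include <stdio.h>

/* One PCIe data credit covers 4 DW (16 bytes) of TLP payload;
 * a header credit covers one TLP header. */
#define BYTES_PER_DATA_CREDIT 16

/* Data credits consumed by a memory-write TLP with 'payload_bytes' of data. */
static unsigned data_credits_needed(unsigned payload_bytes)
{
    return (payload_bytes + BYTES_PER_DATA_CREDIT - 1) / BYTES_PER_DATA_CREDIT;
}

int main(void)
{
    unsigned hdr_credits_avail  = 4;     /* advertised by the receiver (example) */
    unsigned data_credits_avail = 64;    /* 64 credits = 1024 bytes of buffer    */
    unsigned payload = 256;              /* one 256-byte memory write            */

    unsigned need = data_credits_needed(payload);
    printf("256B write needs %u data credits + 1 header credit\n", need);

    if (need <= data_credits_avail && hdr_credits_avail >= 1)
        printf("transmitter may send; %u data credits left afterwards\n",
               data_credits_avail - need);
    else
        printf("transmitter must wait for an UpdateFC DLLP\n");
    return 0;
}
```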
_2025-05-23_16:12:49_ | 2025-05-23 16:12:49 | 进迭时空RISC-V Vector技术实践 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA Vector | Summary: This text discusses the benefits of RISC-V Vector, a more flexible programming model compared to traditional SIMD instructions, in improving the decoupling between software and hardware. RISC-V Vector supports single instruction multiple data parallel processing while providing higher-level abstractions for developers. The text uses RISC-V Vector 1.0 as an example, explaining how it allows a single program to run on hardware with different vector register widths, and how the element mask function handles excess elements without requiring special handling like in SIMD instructions. The text also mentions that the first-generation RISC-V CPUs, X60 and A60, support RISC-V Vector 1.0 and provide significant performance improvements in various tests compared to Cortex-A55's SIMD instructions.本文讨论了 RISC-V Vector(与传统 SIMD 指令相比,RISC-V Vector)是一种更灵活的编程模型,在改善软件和硬件之间的解耦方面的优势。RISC-V Vector 支持单指令多数据并行处理,同时为开发人员提供更高级别的抽象。本文以 RISC-V Vector 1.0 为例,解释了它如何允许单个程序在具有不同矢量寄存器宽度的硬件上运行,以及元素掩码函数如何处理多余的元素,而无需像 SIMD 指令那样进行特殊处理。文中还提到,第一代 RISC-V CPU X60 和 A60 支持 RISC-V Vector 1.0,与 Cortex-A55 的 SIMD 指令相比,在各种测试中提供了显著的性能改进。 | |
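A minimal strip-mined vector add illustrating the VLEN-agnostic programming model described above; the `__riscv_*` intrinsic names assume a recent GCC/Clang implementing the RVV 1.0 C intrinsics (older toolchains used unprefixed names), so treat this as a sketch for that toolchain assumption.

```c
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* c[i] = a[i] + b[i]; vsetvl picks how many elements this iteration handles,
 * so the same binary runs on any VLEN without knowing the register width. */
void vec_add(int32_t *c, const int32_t *a, const int32_t *b, size_t n)
{
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);          /* elements this pass  */
        vint32m1_t va = __riscv_vle32_v_i32m1(a, vl); /* tail handled by vl  */
        vint32m1_t vb = __riscv_vle32_v_i32m1(b, vl);
        vint32m1_t vc = __riscv_vadd_vv_i32m1(va, vb, vl);
        __riscv_vse32_v_i32m1(c, vc, vl);
        a += vl; b += vl; c += vl; n -= vl;
    }
}
```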
_2025-05-23_15:57:08_ | 2025-05-23 15:57:08 | 11. TPH Support — The Linux Kernel documentation | 原文链接失效了?试试备份 | TAGs:设备与驱动 PCI_PCIe TPH | Summary: The Linux Kernel's 6.15.0-rc7 documentation introduces TPH (TLP Processing Hints), a PCIe feature that enables endpoint devices to provide optimization hints for memory space requests. These hints, called Steering Tags (STs), are embedded in the requester's TLP headers, allowing the system hardware to manage resources more efficiently. To use TPH, the Linux kernel must be built with the CONFIG_PCIE_TPH option, and the driver must enable and manage TPH support using provided APIs. The driver can retrieve and write Steering Tags for target memories associated with specific CPUs. TPH is optional and can be enabled or disabled system-wide using the "notph" kernel command line option.Linux 内核的 6.15.0-rc7 文档介绍了 TPH(TLP 处理提示),这是一项 PCIe 功能,使端点设备能够为内存空间请求提供优化提示。这些提示称为转向标签 (ST),嵌入在请求者的 TLP 标头中,使系统硬件能够更有效地管理资源。要使用 TPH,必须使用 CONFIG_PCIE_TPH 选项构建 Linux 内核,并且驱动程序必须使用提供的 API 启用和管理 TPH 支持。驱动程序可以检索和写入与特定 CPU 关联的目标内存的转向标签。TPH 是可选的,可以使用 “notph” 内核命令行选项在系统范围内启用或禁用。 | |
|
|
_2025-05-23_14:55:16_ | 2025-05-23 14:55:16 | 32. Intel Virtual Function Driver — Data Plane Development Kit 25.03.0 documentation | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 IO SR-IOV | Summary: This text describes the usage of Intel Network Interface Controller Drivers in a data plane development kit (DPDK) environment for virtualized environments. The text covers two modes of operation for Intel Ethernet Controllers in a virtualized environment: SR-IOV mode and VMDq mode. SR-IOV mode involves direct assignment of part of the port resources to different guest operating systems using the PCI-SIG Single Root I/O Virtualization (SR IOV) standard. VMDq mode involves central management of the networking resources by an IO Virtual Machine (IOVM) or a Virtual Machine Monitor (VMM). | |
_2025-05-23_11:57:30_ | 2025-05-23 11:57:30 | Intel® Data Direct I_O Technology | 原文链接失效了?试试备份 | TAGs:设备与驱动 PCI_PCIe cache direct injection | Summary: Intel Data Direct I/O Technology (Intel DDIO) is a feature introduced with the Intel Xeon processor E5 and E7 v2 families that allows Intel Ethernet Controllers and adapters to communicate directly with the processor cache, increasing bandwidth, reducing latency, and decreasing power consumption. This is achieved by making the processor cache the primary destination and source of I/O data instead of the traditional method of going through main memory first. Intel DDIO is enabled by default on all Intel Xeon processor E5 and E7 v2 family platforms and benefits all I/O devices, including Ethernet, InfiniBand, Fibre Channel, and RAID. The technology provides significant performance benefits for I/O-bound workloads and reduced power consumption for non-IO-bound workloads. Intel Ethernet products, with their high-performing, stateless architecture, take advantage of the improvements in communication between host and network controller that Intel DDIO provides.英特尔 Data Direct I/O 技术(英特尔 DDIO)是英特尔至强处理器 E5 和 E7 v2 系列中引入的一项功能,允许英特尔以太网控制器和适配器直接与处理器缓存通信,从而增加带宽、减少延迟并降低功耗。这是通过使处理器高速缓存成为 I/O 数据的主要目标和来源来实现的,而不是首先通过主内存的传统方法。英特尔 DDIO 在所有英特尔至强处理器 E5 和 E7 v2 系列平台上默认启用,并受益于所有 I/O 设备,包括以太网、InfiniBand、光纤通道和 RAID。该技术为 I/O 密集型工作负载提供了显著的性能优势,并降低了非 IO 密集型工作负载的功耗。英特尔以太网产品具有高性能、无状态架构,利用了英特尔 DDIO 提供的主机和网络控制器之间通信的改进。 | |
_2025-05-23_11:54:23_ | 2025-05-23 11:54:23 | TPH and cache direct injection support [LWN.net] | 原文链接失效了?试试备份 | TAGs:设备与驱动 PCI_PCIe cache direct injection | Summary: A patch series is proposed for adding TPH (TLP Processing Hints) support in Linux, which is a PCIe feature that allows endpoint devices to provide optimization hints for memory space requests. The new Cache Injection feature leverages TPH and allows PCIe endpoints to inject I/O Coherent DMA writes directly into an L2 cache. This results in memory bandwidth savings and better network performance for applications requiring high performance and low latency, such as networking and storage applications. The patch series includes changes to various Linux kernel files and drivers, specifically the Broadcom BNXT driver, and addresses compilation warnings and errors.提出了一个补丁系列,用于在 Linux 中添加 TPH(TLP 处理提示)支持,这是一项 PCIe 功能,允许端点设备为内存空间请求提供优化提示。新的高速缓存注入功能利用 TPH,并允许 PCIe 端点将 I/O 一致性 DMA 写入直接注入 L2 高速缓存。这可为需要高性能和低延迟的应用程序(如网络和存储应用程序)节省内存带宽并提高网络性能。此补丁系列包括对各种 Linux 内核文件和驱动程序(特别是 Broadcom BNXT 驱动程序)的更改,并解决了编译警告和错误。 | |
_2025-05-22_19:48:10_ | 2025-05-22 19:48:10 | 万字长文:官方解读RISC-V | 原文链接失效了?试试备份 | TAGs:处理器 risc-v | Summary: This text summarizes the story of the RISC-V processor architecture and its development over the past 15 years. The article begins with an email sent by a student, Andrew Waterman, to his professors in 2010, expressing his belief that they should revive the DEC Alpha microprocessor architecture. However, his professors, including Krste Asanović, saw the need for a new ISA due to the limitations of existing ISAs and the demands of Moore's Law and Dennard scaling. | |
_2025-05-19_19:09:58_ | 2025-05-19 19:09:58 | 基于硬件仿真加速器的PCIe接口验证方法探究和实现 | 原文链接失效了?试试备份 | TAGs:处理器 验证 | Summary: The Cadence Palladium Z1 hardware emulation accelerator can reach a maximum frequency of 4MHz, but it doesn't meet the requirements of the PCIe interface. To address this issue, the Palladium platform offers a solution by using SpeedBridge for rate adaptation on both ends.Cadence Palladium Z1 硬件仿真加速器可以达到 4MHz 的最大频率,但不符合 PCIe 接口的要求。为了解决这个问题,Palladium 平台提供了一种解决方案,使用 SpeedBridge 在两端进行速率自适应。 | |
_2025-05-19_16:07:09_ | 2025-05-19 16:07:09 | Knut's QEMU patchwork · knuto_qemu Wiki | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 IO SR-IOV | Summary: A user has implemented SR/IOV emulation patches in QEMU, which are now part of the official QEMU project from version 7.1.0. The patches allow for the emulation of SR/IOV in a virtual machine, but lack a fully implemented example device. An example of using the patches is provided for an NVME Express device, which can be tested with an igb ethernet device. The igb device can be tested by starting QEMU with specific parameters and observing the number of devices detected in the system. However, the igb device is not fully functional and should be considered as a demonstration of the SR/IOV patch set rather than a working device.用户在 QEMU 中实施了 SR/IOV 仿真补丁,这些补丁现在是 7.1.0 版官方 QEMU 项目的一部分。这些补丁允许在虚拟机中模拟 SR/IOV,但缺少完全实现的示例设备。为 NVME Express 设备提供了一个使用补丁的示例,该设备可以使用 igb 以太网设备进行测试。可以通过使用特定参数启动 QEMU 并观察系统中检测到的设备数量来测试 igb 设备。但是,igb 设备功能不全,应被视为 SR/IOV 补丁集的演示,而不是工作设备。 | |
_2025-05-18_16:19:47_ | 2025-05-18 16:19:47 | 打破DPDK的误区: 数据面最流行的工具包DPDK,前世今生,未来 - 知乎 | 原文链接失效了?试试备份 | TAGs:网络 DPDK | Summary: DPDK, or Data Plane Development Kit, is an open-source project under the Linux Foundation that provides user-space libraries and drivers to accelerate data packet processing workloads on various major CPU architectures. It was initially created by Intel around ten years ago and is now one of the Linux Foundation's open-source projects. From enterprise data centers to public clouds, DPDK has significantly contributed to driving the use of high-performance general-purpose CPUs in networking. | |
|
_2025-05-18_14:09:58_ | 2025-05-18 14:09:58 | SoC 越复杂,NoC 越关键! | 原文链接失效了?试试备份 | TAGs:处理器 总线 | Summary: The complexity of System-on-Chips (SoC) is increasing exponentially, making Network-on-Chips (NoC) increasingly crucial for efficient and scalable data transfer and communication within the chip. As AI, high-performance computing (HPC), and other data-intensive applications continue to evolve, advanced NoC solutions are required to address the challenges of large-scale SoC design. Despite the opportunities brought by these technological advancements, there are significant challenges for SoC designers, including rapidly expanding architectures, tight deadlines, scarce professional talent, low resource utilization, and fragmented design toolchains. The text discusses the growing complexity of SoCs, the importance of NoCs, and the challenges faced by SoC designers in the era of big chiplets.片上系统 (SoC) 的复杂性呈指数级增长,这使得片上网络 (NoC) 对于芯片内高效且可扩展的数据传输和通信越来越重要。随着 AI、高性能计算 (HPC) 和其他数据密集型应用的不断发展,需要先进的 NoC 解决方案来应对大规模 SoC 设计的挑战。尽管这些技术进步带来了机遇,但 SoC 设计人员面临着重大挑战,包括快速扩展的架构、紧迫的期限、稀缺的专业人才、低资源利用率和碎片化的设计工具链。本文讨论了 SoC 日益增长的复杂性、NoC 的重要性以及 SoC 设计人员在小芯片时代面临的挑战。 | |
|
_2025-05-16_18:00:37_ | 2025-05-16 18:00:37 | 夏季天气炎热,很多商场、写字楼都集中供冷,居民楼有可能集中供冷吗,会比空调节约成本吗? - 知乎 | 原文链接失效了?试试备份 | TAGs:生活 房 | Summary: The input discusses the topic of centralized cooling systems, specifically in the context of residential buildings in China. The author shares their personal experience of living in a building with centralized cooling in the United States and expresses their belief that centralized cooling is a viable solution for saving energy and reducing noise. They also mention the potential benefits of centralized cooling, such as increased efficiency, reduced outdoor noise pollution, and the ability to cover larger areas with consistent temperatures. However, they acknowledge that the implementation of centralized cooling systems in China faces challenges, including high initial investment costs and resistance from residents. The author also mentions the importance of designing the system efficiently and utilizing natural cooling sources to make centralized cooling a cost-effective solution.该意见讨论了集中供冷系统的主题,特别是在中国的住宅建筑中。作者分享了他们在美国集中供冷的建筑物中生活的个人经历,并表达了他们认为集中供冷是节能和降低噪音的可行解决方案。他们还提到了集中冷却的潜在好处,例如提高效率、减少室外噪音污染以及能够以一致的温度覆盖更大的区域。然而,他们承认,在中国实施集中供冷系统面临挑战,包括高昂的初始投资成本和居民的抵制。作者还提到了有效设计系统并利用自然冷却源使集中冷却成为经济高效的解决方案的重要性。 | |
_2025-05-16_17:11:25_ | 2025-05-16 17:11:25 | 技术概述 – OpenFastPath --- Technical Overview – OpenFastPath | 原文链接失效了?试试备份 | TAGs:网络 OpenFastPath | Summary: The OpenFastPath (OFP) project is an open-source accelerated routing and forwarding solution for IPv4 and IPv6, designed for use in data center environments. It provides fast path processing for UDP, TCP, and ICMP, as well as support for various protocols and tunneling methods. The architecture includes an OFP library, user application code, Linux host system, and network interfaces. The OFP system consists of a main thread, dispatcher, and OFP multicore system view. Main features include fast path protocols processing, IPv4 and IPv6 forwarding and routing, and integration with the Linux slow path IP stack. OFP contains a command line interface for configuration and debugging, and offers optimized zero-copy APIs for single thread run-to-completion environments. Limitations include functional but not performance-optimized TCP implementation. For further reading, check out the project documentation on GitHub.OpenFastPath (OFP) 项目是适用于 IPv4 和 IPv6 的开源加速路由和转发解决方案,专为在数据中心环境中使用而设计。它为 UDP、TCP 和 ICMP 提供快速路径处理,并支持各种协议和隧道方法。该架构包括 OFP 库、用户应用程序代码、Linux 主机系统和网络接口。OFP 系统由主线程、分派器和 OFP 多核系统视图组成。主要功能包括快速路径协议处理、IPv4 和 IPv6 转发和路由,以及与 Linux 慢速路径 IP 堆栈的集成。OFP 包含用于配置和调试的命令行界面,并为单线程运行到完成环境提供优化的零拷贝 API。限制包括功能上未优化性能的 TCP 实现。如需进一步阅读,请查看 GitHub 上的项目文档。 | |
_2025-05-15_15:08:26_ | 2025-05-15 15:08:26 | BlueField Supported Interfaces - NVIDIA Docs | 原文链接失效了?试试备份 | TAGs:DPU DPU 介绍 | Summary: NVIDIA BlueField-3 is a System-on-Chip (SoC) that integrates a 64-bit Armv8.2+ A78 Hercules cores array, an NVIDIA ConnectX-7 network adapter front-end, and a PCI Express switch. It features an Arm multicore processor array for advanced application development and software ecosystem support. The ConnectX-7 network offload controller delivers high performance for networking and storage applications, with RDMA and RoCE technology, an embedded virtual switch with ACLs, and transport offloads for NVMe over Fabrics, VXLAN, and MPLS overlay protocols.NVIDIA BlueField-3 是一款系统级芯片 (SoC),集成了 64 位 Armv8.2+ A78 Hercules 内核阵列、NVIDIA ConnectX-7 网络适配器前端和 PCI Express 交换机。它具有 Arm 多核处理器阵列,用于高级应用程序开发和软件生态系统支持。ConnectX-7 网络卸载控制器采用 RDMA 和 RoCE 技术、带 ACL 的嵌入式虚拟交换机,以及 NVMe over Fabrics、VXLAN 和 MPLS 叠加协议的传输卸载,为网络和存储应用程序提供高性能。 | |
_2025-05-15_11:23:24_ | 2025-05-15 11:23:24 | 环形振荡器与CPU硬件随机数解析 - DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:处理器 随机数 | Summary: A circular oscillator and a CPU hardware random number generator are closely related concepts in computer hardware design. The following is a detailed explanation and analysis of both: | |
_2025-05-15_11:21:02_ | 2025-05-15 11:21:02 | 文献翻译_Design of True Random Number Generator Based on Multi-stage Feedback Ring Oscillator 基于多级反馈环形振荡器的真随机数发生器设计 | 原文链接失效了?试试备份 | TAGs:处理器 随机数 | Summary: A new method for generating true random numbers on FPGAs using a multi-level feedback ring oscillator (MSFRO) as the entropy source is proposed in this article. By adding a multi-level feedback structure to traditional ring oscillators, the clock jitter range is expanded, increasing the clock sampling frequency and entropy source randomness. Unlike traditional clock sampling structures, the clock jitter signal generated by MSFRO is used to sample the clock signal from the FPGA's phase-locked loop (PLL). The output values are then XORed to reduce bias and improve randomness. This TRNG was implemented on an Xilinx Virtex-6 FPGA, with low hardware resource consumption and high throughput. Comparisons of entropy sources, hardware resources, and throughput with existing TRNGs showed that the proposed design uses only 24 LUTs and 2 DFFs. Compared to other TRNGs, this design has very low hardware resource usage and achieves a throughput of 290 Mbps. The generated random bit sequence passed NIST SP800-22 and NIST SP80090B tests.本文提出了一种使用多级反馈环形振荡器 (MSFRO) 作为熵源在 FPGA 上生成真随机数的新方法。通过在传统环形振荡器上增加多级反馈结构,扩大了时钟抖动范围,提高了时钟采样频率和熵源随机性。与传统的 clock sampling 结构不同,MSFRO 生成的 clock jitter 信号用于从 FPGA 的锁相环 (PLL) 对 clock 信号进行采样。然后对输出值进行 XOR 运算以减少偏差并提高随机性。该 TRNG 在 Xilinx Virtex-6 FPGA 上实现,具有低硬件资源消耗和高吞吐量。将熵源、硬件资源和吞吐量与现有 TRNG 进行比较表明,所提出的设计仅使用 24 个 LUT 和 2 个 DFF。与其他 TRNG 相比,该设计的硬件资源使用率非常低,吞吐量为 290 Mbps。生成的随机位序列通过了 NIST SP800-22 和 NIST SP80090B 测试。 | |
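As a generic illustration of the XOR post-processing step mentioned above (not the paper's circuit): if two independent bits are each 1 with probability 0.5 + e, their XOR is 1 with probability 0.5 - 2e², so the bias shrinks quadratically. The simulation below uses a software stand-in for the raw jittery samples.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a raw ring-oscillator sample: a biased bit that is 1 with
 * probability 0.5 + BIAS. In hardware this would come from jitter sampling. */
#define BIAS 0.1

static int raw_bit(void)
{
    return ((double)rand() / RAND_MAX) < (0.5 + BIAS);
}

int main(void)
{
    /* XOR two independent biased bits: if each is 1 with prob 0.5 + e,
     * the XOR is 1 with prob 0.5 - 2e^2, i.e. the bias drops from e to 2e^2. */
    const long N = 1000000;
    long ones_raw = 0, ones_xor = 0;

    for (long i = 0; i < N; i++) {
        int b0 = raw_bit(), b1 = raw_bit();
        ones_raw += b0;
        ones_xor += b0 ^ b1;
    }
    printf("raw  P(1) ~ %.4f (expected ~0.6000)\n", (double)ones_raw / N);
    printf("xor  P(1) ~ %.4f (expected ~0.4800)\n", (double)ones_xor / N);
    return 0;
}
```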
|
|
_2025-05-14_16:34:38_ | 2025-05-14 16:34:38 | DPDK丢包那些事 - T-BARBARIANS - 博客园 | 原文链接失效了?试试备份 | TAGs:网络 DPDK | Summary: This text is a blog post about the author's experience with optimizing DPDK (Data Plane Development Kit) performance on both x86 and ARM platforms. The author discusses their efforts to reduce packet loss in DPDK, which they describe as a persistent issue despite the abundance of related articles. They share their experiences and insights gained during the optimization process, which included increasing ring buffer sizes, optimizing thread tasks, and managing mbuf (memory buffer) releases. The author emphasizes the importance of understanding the underlying causes of packet loss and the role of hardware in the process. They also mention the challenges they faced on the ARM platform and the importance of optimizing for specific hardware architectures. The post concludes with a reflection on the importance of persistence and determination in optimizing complex systems. The text is written in a conversational style and includes code snippets and diagrams to illustrate key concepts. The author's goal is to provide insights and solutions for developers facing similar challenges in their own projects.本文是一篇博客文章,介绍了作者在 x86 和 ARM 平台上优化 DPDK(数据平面开发工具包)性能的经验。作者讨论了他们为减少 DPDK 中的数据包丢失所做的努力,尽管相关文章很多,但他们将其描述为一个持续存在的问题。他们分享了在优化过程中获得的经验和见解,包括增加环形缓冲区大小、优化线程任务和管理 mbuf(内存缓冲区)发布。作者强调了了解数据包丢失的根本原因以及硬件在此过程中的作用的重要性。他们还提到了他们在 ARM 平台上面临的挑战以及针对特定硬件架构进行优化的重要性。本文最后反思了持久性和决心在优化复杂系统中的重要性。文本以对话风格编写,包括代码片段和图表以说明关键概念。作者的目标是为开发人员在自己的项目中面临类似挑战提供见解和解决方案。 | |
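The mbuf-release discipline the post emphasizes shows up in the canonical DPDK receive loop; here is a minimal sketch against the public API (port/queue IDs and burst size are placeholders, EAL setup and error handling are omitted):

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_BURST 32

/* Poll one RX queue, "process" each packet, then return the mbuf to its
 * pool. Forgetting rte_pktmbuf_free() exhausts the mempool and contributes
 * to the buffer-exhaustion drops discussed in the post. */
void rx_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[RX_BURST];

    for (;;) {
        uint16_t nb = rte_eth_rx_burst(port_id, queue_id, pkts, RX_BURST);
        for (uint16_t i = 0; i < nb; i++) {
            /* ... packet processing would go here ... */
            rte_pktmbuf_free(pkts[i]);
        }
    }
}
```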
_2025-05-14_12:01:56_ | 2025-05-14 12:01:56 | 《考察报告》连载九|从SmartNIC到DPU、IPU - 知乎 | 原文链接失效了?试试备份 | TAGs:DPU DPU 介绍 | Summary: This text is about the evolution of data processing technology, from SmartNICs to DPUs and IPUs, focusing on distributed computing and the need for high-speed networking to connect distributed computing power. The text discusses how SmartNICs, which offload tasks from CPUs, have been replaced by DPUs and IPUs, which offer more control and acceleration at the processing level. The text mentions Intel's BlueField-3 DPU and Marvell's OCTEON 10 DPU as examples of these technologies. DPUs and IPUs differ from SmartNICs in that they have control planes and can be self-managed, allowing for more efficient and secure large-scale computing applications. The text also mentions Facebook's report on the benefits of IPUs in microservices architecture and the advantages of using IPUs instead of CPUs for infrastructure processing.本文介绍了数据处理技术的演变,从 SmartNIC 到 DPU 和 IPU,重点介绍分布式计算和连接分布式计算能力的高速网络需求。本文讨论了从 CPU 卸载任务的 SmartNIC 如何被 DPU 和 IPU 所取代,后者在处理级别提供更多控制和加速。文中提到了 Intel 的 BlueField-3 DPU 和 Marvell 的 OCTEON 10 DPU 作为这些技术的示例。DPU 和 IPU 与 SmartNIC 的不同之处在于,它们具有控制平面,并且可以自我管理,从而实现更高效、更安全的大规模计算应用程序。该文本还提到了 Facebook 关于 IPU 在微服务架构中的优势以及使用 IPU 而不是 CPU 进行基础设施处理的优势的报告。 | |
_2025-05-14_11:57:47_ | 2025-05-14 11:57:47 | 浅析SmartNIC:博通vs英特尔vs英伟达vs赛灵思 | 原文链接失效了?试试备份 | TAGs:DPU DPU 介绍 | Summary: This article discusses the difference between traditional Network Interface Cards (NICs) and SmartNICs. While NICs are built around ASICs designed as ethernet controllers, SmartNICs are defined as network cards that allow software to be loaded onto the NIC after purchase to add new features or support other functions. The main difference between a regular NIC and a SmartNIC is that the latter offloads processing from the host CPU and is designed around FPGA platforms. SmartNICs require additional computational power and onboard memory, which regular NICs lack. The article compares the SmartNIC offerings of six companies: Broadcom (Brocade), Intel, Nvidia (Mellanox), Xilinx (Xilinx), Netronome, and Pensando. Broadcom's Stingray SmartNIC uses a single chip method, while Intel's N3000 SmartNIC uses multiple chips. Xilinx's Alveo U25 uses a Zynq SoC, which includes an FPGA and an Arm CPU. Pensando's DSC-25 uses a Capri processor with parallel P4 processing units. The article also discusses the potential benefits of SmartNICs, such as offloading network processing tasks from the host CPU and extending computing power to the network edge.本文讨论了传统网络接口卡 (NIC) 和 SmartNIC 之间的区别。NIC 是围绕设计为以太网控制器的 ASIC 构建的,而 SmartNIC 则被定义为允许在购买后将软件加载到 NIC 上以添加新功能或支持其他功能的网卡。常规 NIC 和 SmartNIC 之间的主要区别在于,后者从主机 CPU 卸载处理,并且是围绕 FPGA 平台设计的。SmartNIC 需要额外的计算能力和板载内存,而普通 NIC 则缺乏这些。本文比较了六家公司的 SmartNIC 产品:Broadcom (Brocade)、Intel、Nvidia (Mellanox)、Xilinx (Xilinx)、Netronome 和 Pensando。Broadcom 的 Stingray SmartNIC 使用单芯片方法,而 Intel 的 N3000 SmartNIC 使用多芯片。Xilinx 的 Alveo U25 使用 Zynq SoC,其中包括一个 FPGA 和一个 Arm CPU。Pensand 的 DSC-25 使用带有并行 P4 处理单元的 Capri 处理器。本文还讨论了 SmartNIC 的潜在优势,例如从主机 CPU 卸载网络处理任务,并将计算能力扩展到网络边缘。 | |
_2025-05-14_10:39:52_ | 2025-05-14 10:39:52 | riscv_ KVM_ Remove unnecessary vcpu kick · torvalds_linux@d252435 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 中断 | Summary: A GitHub page displays information about a commit in the Linux kernel project. The commit, made by Bill Xiang, removes an unnecessary vCPU kick in the riscv: KVM (Kernel-based Virtual Machine) code. The vCPU kick is no longer needed when writing to the vs\_file directly forwards an interrupt as an MSI to the vCPU. The commit also modifies the handling of guest external interrupts. The changes were reviewed by Andrew Jones and Radim Krčmář.GitHub 页面显示有关 Linux 内核项目中提交的信息。由 Bill Xiang 提交的提交删除了 riscv: KVM(基于内核的虚拟机)代码中不必要的 vCPU 踢出。写入 vs\_file 将中断作为 MSI 直接转发到 vCPU 时,不再需要 vCPU 踢出。该提交还修改了客户机外部中断的处理。Andrew Jones 和 Radim Krčmář 审查了这些更改。 | |
_2025-05-09_14:46:54_ | 2025-05-09 14:46:54 | sig-qemu@lists.riscv.org _ RVA23 profile support | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 qemu | Summary: A group discussion on the RVA23 profile support for QEMU is taking place on the sig-qemu list. The new RVA23 profile, which includes mandatory extensions Ss1p13, Zimop, Zcmop, Supm, Ssnpm, Shgatpa, Ssstateen, Shcounterenw, Shvstvala, Shtvala, Shvstvecd, and Shvsatpa, and optional extensions Zabha, Ziccamoc, Zama16b, Sdex, Ssstrict, Svvptc, and Sspm, is being discussed. The group members are encouraging each other to implement these extensions on QEMU and update the progress on the group.关于 RVA23 配置文件对 QEMU 支持的小组讨论正在 sig-qemu 列表中进行。新的 RVA23 配置文件,包括强制性扩展 Ss1p13、Zimop、Zcmop、Supm、Ssnpm、Shgatpa、Ssstateen、Shcounterenw、Shvstvala、Shtvala、Shvstvecd 和 Shvsatpa,以及可选扩展 Zabha、Ziccamoc、Zama16b、Sdex、Ssstrict、Svvptc 和 Sspm,正在讨论中。小组成员互相鼓励在 QEMU 上实施这些扩展,并更新小组的进展。 | |
|
_2025-05-07_14:01:58_ | 2025-05-07 14:01:58 | RVA23 Profile_ Unlocking new possibilities for RISC-V in high-performance, compute-intensive workloads | 原文链接失效了?试试备份 | TAGs:处理器 risc-v | Summary: The RVA23 Profile is a new standard for 64-bit RISC-V application processors, recently ratified by RISC-V International. It ensures software portability and compatibility across hardware implementations, making it ideal for compute-intensive applications, particularly in AI, machine learning, and enterprise-level tasks. The RVA23 Profile includes mandatory Vector and Hypervisor extensions, which enable efficient data processing and virtualization support, respectively. This standardization positions RISC-V as a viable choice for high-performance servers and other compute-heavy systems.RVA23 Profile 是 64 位 RISC-V 应用处理器的新标准,最近获得了 RISC-V International 的批准。它确保了软件的可移植性和跨硬件实施的兼容性,使其成为计算密集型应用程序的理想选择,尤其是在 AI、机器学习和企业级任务中。RVA23 配置文件包括强制性的矢量和虚拟机管理程序扩展,分别支持高效的数据处理和虚拟化支持。这种标准化使 RISC-V 成为高性能服务器和其他计算密集型系统的可行选择。 | |
_2025-04-29_15:53:50_ | 2025-04-29 15:53:50 | Welcome to Linux From Scratch! | 原文链接失效了?试试备份 | TAGs:操作系统 linux | Summary: The page provides options to read the Linux From Scratch (LFS) manual in various stable and development versions, each with their respective correction pages and security notices. Linux From Scratch is a well-known Linux distribution project where users build their systems from source code, providing a deeper understanding of the system and customization options.该页面提供了阅读各种稳定版和开发版的 Linux From Scratch (LFS) 手册的选项,每个版本都有各自的更正页面和安全通知。Linux From Scratch 是一个著名的 Linux 分发项目,用户可以从源代码构建他们的系统,从而更深入地了解系统和自定义选项。 | |
_2025-04-29_11:59:18_ | 2025-04-29 11:59:18 | iommu_riscv_ fix use after free of riscv_iommu_domain - Patchwork | 原文链接失效了?试试备份 | TAGs:处理器 risc-v IOMMU | Summary: This text is an email message about a patch for the RISC-V IOMMU driver in the Linux kernel. The patch addresses a use-after-free issue in the `riscv_iommu_bond_unlink` function. The issue occurs when the `riscv_iommu_domain` is freed but not set to NULL before being used in `riscv_iommu_attach_paging_domain` and `riscv_iommu_bond_unlink`. The patch sets `info->domain` to NULL within `riscv_iommu_bond_unlink` to resolve the issue. The email includes the patch diff and a commit message.此文本是有关 Linux 内核中 RISC-V IOMMU 驱动程序补丁的电子邮件。此补丁解决了“riscv_iommu_bond_unlink”函数中的释放后使用问题。当“riscv_iommu_domain”在“riscv_iommu_attach_paging_domain”和“riscv_iommu_bond_unlink”中使用之前被释放但未设置为 NULL 时,会出现此问题。该补丁将“riscv_iommu_bond_unlink”中的“info->domain”设置为 NULL 以解决此问题。该电子邮件包括 patch diff 和提交消息。 | |
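The fix pattern described here (clear the stale pointer so later code cannot dereference freed memory) can be shown generically. The snippet below is a simplified illustration in plain C, not the actual kernel patch; the struct and function names are invented for the example.

```c
#include <stddef.h>
#include <stdlib.h>

/* Generic illustration of the use-after-free fix pattern, NOT the kernel code:
 * when an object is torn down, the back-pointer that other code paths consult
 * must be cleared rather than left dangling. */
struct domain      { int id; };
struct device_info { struct domain *domain; };

static void unlink_domain(struct device_info *info)
{
    /* Clearing the pointer is what prevents the use-after-free: any later
     * lookup sees NULL and bails out instead of touching freed memory. */
    info->domain = NULL;
}

static void detach_and_free(struct device_info *info, struct domain *dom)
{
    unlink_domain(info);   /* unlink first ... */
    free(dom);             /* ... then release the object */
}
```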
_2025-04-28_20:19:09_ | 2025-04-28 20:19:09 | 向 Linux 内核社区提交 patch 实操要点 - 魅族内核团队 | 原文链接失效了?试试备份 | TAGs:操作系统 linux Contribute | Summary: This text outlines the steps to submit a patch to the Linux kernel community using Git and git send-emails. The process includes installing Git and git-send-emails, configuring Git and SMTP, downloading Linux kernel code, creating and formatting patches, checking patch format, and sending patches to the appropriate maintainers. The text also mentions the importance of following the community's guidelines, such as using bottom-posting in emails and avoiding top-posting. The text concludes by mentioning the importance of seeking feedback and learning from others in the Linux community.本文概述了使用 Git 和 git send-emails 向 Linux 内核社区提交补丁的步骤。该过程包括安装 Git 和 git-send-emails、配置 Git 和 SMTP、下载 Linux 内核代码、创建和格式化补丁、检查补丁格式以及将补丁发送给相应的维护人员。文本还提到了遵循社区准则的重要性,例如在电子邮件中使用 bottom-posting 和避免 top-posting。本文最后提到了寻求反馈和向 Linux 社区中的其他人学习的重要性。 | |
|
_2025-04-24_15:29:03_ | 2025-04-24 15:29:03 | An AnandTech Interview with Jim Keller_ 'The Laziest Person at Tesla' | 原文链接失效了?试试备份 | TAGs:company&job | Summary: Jim Keller is a semiconductor designer with an impressive career spanning several decades and various companies, including DEC, AMD, Broadcom, PA Semi, Apple, Tesla, Intel, and now Tenstorrent. He is known for his work ethic and ability to tackle complex challenges in the field of high-performance computing, self-driving technology, and artificial intelligence. | |
|
_2025-04-23_21:06:00_ | 2025-04-23 21:06:00 | DPUaudit_ DPU-assisted Pull-based Architecture for Near-Zero Cost System Auditing _ IEEE Conference Publication _ IEEE Xplore | 原文链接失效了?试试备份 | TAGs:DPU 应用 | Summary: This text is about a research paper titled "DPUaudit: DPU-assisted Pull-based Architecture for Near-Zero Cost System Auditing," published in the 2025 IEEE International Symposium on High Performance Computer Architecture. The paper proposes a new hardware-based auditing framework called DPUaudit, which utilizes DPU to pull system events from the monitored host instead of using a log sender. This pull-based architecture eliminates the need for heavy software protection mechanisms, resulting in near-zero runtime overhead and efficient system auditing. The experimental results show that DPUaudit only slows down applications on the monitored host by an average of 2.1%, which is significantly less than existing approaches.本文是关于一篇题为“DPUaudit:用于近零成本系统审计的 DPU 辅助拉式架构”的研究论文,该论文发表在 2025 年 IEEE 高性能计算机体系结构国际研讨会上。该白皮书提出了一种称为 DPUaudit 的基于硬件的新审计框架,该框架利用 DPU 从受监控主机中提取系统事件,而不是使用日志发件人。这种基于拉取的架构消除了对繁重的软件保护机制的需求,从而实现了近乎零的运行时开销和高效的系统审计。实验结果表明,DPUaudit 仅使受监控主机上的应用程序平均减慢 2.1%,这明显低于现有方法。 | |
|
_2025-04-23_19:27:36_ | 2025-04-23 19:27:36 | 魅族内核团队 | 原文链接失效了?试试备份 | TAGs:操作系统 linux 博客 | Summary: This page is a technical sharing platform for the Meizu kernel team. It contains several posts discussing various Linux kernel-related topics. One post is about the evolution from spin-table to PSCI (Power State Coordination Interface) for Linux SMP (Symmetric Multiprocessing) start-up, highlighting the necessity of this progression due to the limitations and hardware coupling of the spin-table mechanism. Another post discusses the irq_work generic interrupt callback mechanism and its first version features. A third post explores the RTG (Related Thread Group) feature in Qualcomm's WALT (Window Assisted Load Tracking) scheduler. The last post is about submitting patches to the Linux kernel community, with instructions on installing git and git-send-email for submitting patches. There are also posts about analyzing a deadlock issue, the cgroup freezer subsystem, and the Page Table Check mechanism for improving Linux kernel memory stability.本页面是魅族内核团队的技术分享平台。它包含几篇讨论各种 Linux 内核相关主题的文章。其中一篇博文是关于 Linux SMP(对称多处理)启动方式从 spin-table 到 PSCI (Power State Coordination Interface) 的演变,强调了由于 spin-table 机制的限制和硬件耦合,这种演进的必要性。另一篇文章讨论了 irq_work 通用中断回调机制及其第一个版本的功能。第三篇博文探讨了 Qualcomm 的 WALT(窗口辅助负载跟踪)调度器中的 RTG(相关线程组)功能。最后一篇文章是关于向 Linux 内核社区提交补丁的,并附有安装 git 和 git-send-email 以提交补丁的说明。此外,还有一些关于分析死锁问题、cgroup 冻结子系统和用于提高 Linux 内核内存稳定性的页表检查机制的文章。 | |
|
|
_2025-04-22_14:43:19_ | 2025-04-22 14:43:19 | RISC-V嵌套虚拟化支持 - 允许Hypervisor上运行Hypervisor_哔哩哔哩_bilibili | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 | Summary: This page discusses the support for RISC-V nested virtualization, which enables the running of a hypervisor on another hypervisor, referred to as the "nested hypervisor," at the guest level on the first hypervisor. This architecture allows for increased security and efficiency in virtualized systems.本页讨论了对 RISC-V 嵌套虚拟化的支持,它允许在第一个虚拟机管理程序的来宾级别在另一个虚拟机管理程序(称为“嵌套虚拟机管理程序”)上运行虚拟机管理程序。此体系结构可以提高虚拟化系统的安全性和效率。 | |
_2025-04-21_15:39:23_ | 2025-04-21 15:39:23 | 图解|透明大页原理与实现-轻识 | 原文链接失效了?试试备份 | TAGs:内存 HugePage | Summary: The article discusses the benefits of using Transparent Huge Pages (THP) in Linux systems, which is an alternative to the standard large pages. THP simplifies the process of using large pages by automatically merging contiguous virtual memory addresses larger than 2MB into a single large page, reducing the need for TLB lookups, page table entries, and page faults. The core idea of THP is to continuously scan a process's virtual memory space and merge physical memory into a large page if the condition is met. The article also explains the logic steps of how THP works and how it differs from standard large pages.本文讨论了在 Linux 系统中使用透明大页面 (THP) 的好处,它是标准大页面的替代方案。THP 通过自动将大于 2MB 的连续虚拟内存地址合并到单个大页面中,减少了对 TLB 查找、页表条目和页面错误的需求,从而简化了使用大页面的过程。THP 的核心思想是持续扫描进程的虚拟内存空间,并在满足条件时将物理内存合并为一个大页面。本文还解释了 THP 如何工作的逻辑步骤以及它与标准大页面的不同之处。 | |
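From user space, THP behaviour can be requested explicitly with `madvise(MADV_HUGEPAGE)`. The following is a small self-contained sketch of that interaction; whether the region actually gets 2MB pages still depends on kernel configuration and khugepaged.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Minimal sketch: ask the kernel to back a 64 MiB anonymous mapping with
 * transparent huge pages. MADV_HUGEPAGE is only a hint; khugepaged (or the
 * page-fault path) decides whether 2 MiB-aligned ranges are actually
 * collapsed into huge pages. */
int main(void)
{
    size_t len = 64UL << 20;
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    if (madvise(buf, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");   /* kernel may lack THP support */

    memset(buf, 0, len);   /* touching the pages lets huge pages be used */
    /* /proc/self/smaps ("AnonHugePages:") shows how much was promoted. */
    munmap(buf, len);
    return 0;
}
```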
_2025-04-20_15:30:19_ | 2025-04-20 15:30:19 | 6.S081 _ Fall 2021 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 操作系统 | Summary: This text provides a schedule for the MIT 6.S081: Operating System Engineering course, including lecture topics, preparation assignments, and homework due dates. The course covers various aspects of operating systems, such as system calls, page tables, scheduling, file systems, and virtual memory. Students are expected to attend lectures, read assigned materials, and complete homework assignments. The schedule also includes some holidays and breaks throughout the semester.本文提供了 MIT 6.S081:操作系统工程课程的时间表,包括讲座主题、准备作业和家庭作业截止日期。该课程涵盖操作系统的各个方面,例如系统调用、页表、调度、文件系统和虚拟内存。学生需要参加讲座、阅读指定的材料并完成家庭作业。该时间表还包括整个学期的一些假期和休息时间。 | |
_2025-04-20_15:22:48_ | 2025-04-20 15:22:48 | MIT 6.S081_ Operating System Engineering - CS自学指南 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 操作系统 | Summary: This is an introduction to MIT 6.S081: Operating System Engineering, a course offered at the Massachusetts Institute of Technology (MIT). The course is taught by professors who developed the operating system JOS and have now created a new one called xv6 based on RISC-V. The course requires a solid foundation in system architecture, C language, and RISC-V assembly language. The course material is primarily in C and RISC-V, and its difficulty level is rated as five stars. The estimated study time is 150 hours. | |
_2025-04-20_15:09:58_ | 2025-04-20 15:09:58 | 6.S081——补充材料——RISC-V架构中的异常与中断详解_risc-v 中断设计-CSDN博客 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 中断 | Summary: This blog post discusses the concept of exceptions and interrupts in the RISC-V architecture. It explains that exceptions are defined as unexpected situations that occur during instruction execution, while interrupts are caused by external asynchronous events. The post also covers the role of the Supervisor and Machine modes in handling exceptions and the use of the CSR registers in the exception handling process. The article also touches upon the concept of interrupt delegation and the difference between direct and vectorized exception handling. The post is intended for readers who are interested in the RISC-V architecture and its exception and interrupt handling mechanisms. | |
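As a rough illustration of the scause/sepc handling the post walks through, here is a hedged C sketch for RV64. It assumes a bare-metal or kernel S-mode context (it is not runnable as an ordinary user program), and the dispatch logic is deliberately skeletal.

```c
#include <stdint.h>

/* Illustrative sketch only: after a trap, read scause and decide whether it
 * was an interrupt (top bit set) or a synchronous exception. */
static inline uint64_t read_scause(void)
{
    uint64_t v;
    __asm__ volatile("csrr %0, scause" : "=r"(v));
    return v;
}

static inline uint64_t read_sepc(void)
{
    uint64_t v;
    __asm__ volatile("csrr %0, sepc" : "=r"(v));
    return v;
}

void trap_dispatch(void)
{
    uint64_t cause = read_scause();
    if (cause >> 63) {
        /* Interrupt: low bits give the interrupt number
         * (e.g. 5 = supervisor timer, 9 = supervisor external). */
        uint64_t irq = cause & ~(1ULL << 63);
        (void)irq;   /* dispatch to the matching interrupt handler */
    } else {
        /* Exception: scause holds the exception code (e.g. 8 = ecall from
         * U-mode) and sepc points at the trapping instruction. */
        uint64_t epc = read_sepc();
        (void)epc;   /* handle, then sret resumes (possibly at sepc + 4) */
    }
}
```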
|
|
_2025-04-18_19:26:06_ | 2025-04-18 19:26:06 | The RISC-V Instruction Set Manual_ Volume II_ Privileged Architecture | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA | Summary: This document describes the RISC-V privileged architecture, which covers aspects of RISC-V systems beyond the unprivileged ISA. It includes privileged instructions and additional functionality required for running operating systems and attaching external devices. The document includes the terminology used for different software stack components, the concept of privilege levels, and the use of control and status registers (CSRs). The RISC-V architecture supports three privilege levels: Machine, Supervisor, and User. The Machine level has the highest privileges and is the only mandatory privilege level for a RISC-V hardware platform. The Supervisor level is used for operating systems and other privileged software, while the User level is used for applications. The document also discusses debug mode and control and status registers (CSRs), including their address mapping conventions and a CSR listing. The CSR address space is divided into unprivileged and user-level CSRs, supervisor-level CSRs, hypervisor and virtual supervisor CSRs, and machine-level CSRs. The document also mentions the Zicsr extension, which is required for all RISC-V implementations, and the SYSTEM major opcode used for all privileged instructions.本文档介绍了 RISC-V 特权体系结构,它涵盖了非特权 ISA 之外的 RISC-V 系统的各个方面。它包括运行作系统和连接外部设备所需的特权指令和附加功能。本文档包括用于不同软件堆栈组件的术语、权限级别的概念以及控制和状态寄存器 (CSR) 的使用。RISC-V 架构支持三个权限级别:Machine、Supervisor 和 User。Machine 级别具有最高权限,并且是 RISC-V 硬件平台的唯一强制权限级别。Supervisor 级别用于作系统和其他特权软件,而 User 级别用于应用程序。本文档还讨论了调试模式以及控制和状态寄存器 (CSR),包括它们的地址映射约定和 CSR 列表。CSR 地址空间分为非特权和用户级 CSR、主管级 CSR、虚拟机管理程序和虚拟主管 CSR 以及计算机级 CSR。该文档还提到了 Zicsr 扩展,这是所有 RISC-V 实现所必需的,以及用于所有特权指令的 SYSTEM 主要作码。 | |
_2025-04-18_19:20:44_ | 2025-04-18 19:20:44 | RISC-V on the Performance Top _ Performance Blog | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 性能 | Summary: This text consists of several blog posts by Fei Wu discussing various topics related to RISC-V, including its performance, vector extensions on Valgrind, the importance of frame pointers, implicit type conversions causing panics, Git bisect for debugging, RISC-V interrupt handling, challenges and advantages of RISC-V, and RISC-V syscall performance regression. The posts also mention testing results and commands used for analysis.本文由 Fei Wu 的几篇博客文章组成,讨论了与 RISC-V 相关的各种主题,包括其性能、Valgrind 上的向量扩展、帧指针的重要性、导致 panic 的隐式类型转换、用于调试的 Git bisect、RISC-V 中断处理、RISC-V 的挑战和优势以及 RISC-V 系统调用性能回归。这些帖子还提到了用于分析的测试结果和命令。 | |
|
|
_2025-04-17_20:45:08_ | 2025-04-17 20:45:08 | 使用kvmtool运行和调试Linux内核 - 知乎 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 调试 | Summary: This text is about using the tool kvmtool to run and debug Linux kernels. The author compares kvmtool to QEMU and explains why they chose kvmtool for learning the core principles of KVM virtualization technology. They discuss the environment preparation, getting and compiling the source code, and using kvmtool to start a virtual machine. The text also covers common kvmtool commands and using GDB for debugging.本文是关于使用 kvmtool 工具运行和调试 Linux 内核的。作者将 kvmtool 与 QEMU 进行了比较,并解释了他们为什么选择 kvmtool 来学习 KVM 虚拟化技术的核心原理。他们讨论了环境准备、获取和编译源代码以及使用 kvmtool 启动虚拟机。该文本还涵盖了常见的 kvmtool 命令和使用 GDB 进行调试。 | |
_2025-04-17_15:18:01_ | 2025-04-17 15:18:01 | 一篇搞懂KSM机制剖析 — Linux内核中的内存去耦合 - 知乎 | 原文链接失效了?试试备份 | TAGs:操作系统 linux 内存 ksm | Summary: This article explains the concept and implementation of KSM (Kernel Samepage Merging) in the Linux kernel, a feature that allows a system manager (hypervisor) to merge identical memory pages and increase the number of parallel virtual machines. The article also discusses the background of KSM, its benefits, and ways to manage it. The article also mentions the history of server virtualization and the advantages of memory sharing in this context. The article concludes by discussing the importance of KSM in reducing memory usage and increasing the capacity of a server to host multiple applications or virtual machines. The article also recommends some resources for further learning.本文介绍了 Linux 内核中 KSM (Kernel Samepage Merging) 的概念和实现,该功能允许系统管理器 (hypervisor) 合并相同的内存页面并增加并行虚拟机的数量。本文还讨论了 KSM 的背景、优势以及管理它的方法。本文还提到了服务器虚拟化的历史以及在此上下文中内存共享的优势。本文最后讨论了 KSM 在减少内存使用和增加服务器托管多个应用程序或虚拟机的能力方面的重要性。本文还推荐了一些资源以供进一步学习。 | |
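Applications opt their memory into KSM with `madvise(MADV_MERGEABLE)`. Below is a minimal sketch of that call; merging only happens if the kernel has CONFIG_KSM and the ksmd daemon is enabled via /sys/kernel/mm/ksm/run.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Minimal sketch: mark an anonymous region as a KSM merge candidate.
 * Pages only get merged if KSM is running (echo 1 > /sys/kernel/mm/ksm/run)
 * and ksmd finds identical page contents. */
int main(void)
{
    size_t len = 16UL << 20;
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memset(buf, 0x5a, len);                 /* many identical pages */

    if (madvise(buf, len, MADV_MERGEABLE) != 0)
        perror("madvise(MADV_MERGEABLE)");  /* kernel built without CONFIG_KSM? */

    /* /sys/kernel/mm/ksm/pages_sharing shows how many pages were merged. */
    getchar();                              /* keep the mapping alive to observe */
    munmap(buf, len);
    return 0;
}
```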
_2025-04-14_15:51:22_ | 2025-04-14 15:51:22 | RISC-V架构下外设虚拟化解决方案 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 IO | Summary: The RISC-V architecture introduced the IOMMU to address DMA transfer performance issues in virtual machines. The IOMMU provides GPA to SPA address translation for each DMA device through a device table. With an IOMMU, DMA data transfers can be handled automatically by the hardware, reducing the need for the hypervisor to trap and emulate every DMA transfer. Additionally, the IOMMU allows the CPU and DMA devices to share the same process table, enabling VU-mode (guest user) processes to use DMA directly. For DMA devices with IOVA to GPA remapping, such as GPUs, the IOMMU's process table can be used for automatic IOVA to GPA to SPA address translation. RISC-V's IOMMU supports PCIe's ATS and PRI interfaces, allowing for optimized MSI address translation for PCIe devices. (图1 - IOMMU下的两级地址翻译) | |
|
|
_2025-04-11_14:07:01_ | 2025-04-11 14:07:01 | riscv_ KVM_ Remove unnecessary vcpu kick - kernel_git_riscv_linux.git - RISC-V Linux kernel tree | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 | Summary: The RISC-V Linux kernel tree had a commit on February 21, 2025, by Bill Xiang, which removed an unnecessary vCPU kick after writing to the vs\_file in kvm\_riscv\_vcpu\_aia\_imsic\_inject. This change is applicable for vCPUs that are running and have their interrupts forwarded directly as an MSI. For vCPUs that are descheduled after emulating WFI, the guest external interrupt is enabled, causing the writing to the vs\_file to cause a guest external interrupt and wake up the vCPU in hgei\_interrupt to handle the interrupt properly. The commit was reviewed by Andrew Jones and Radim Krčmář and signed off by Anup Patel. The diff shows one deletion in arch/riscv/kvm/aia\_imsic.c.RISC-V Linux 内核树于 2025 年 2 月 21 日由 Bill Xiang 提交,该提交在写入 kvm\_riscv\_vcpu\_aia\_imsic\_inject 中的 vs\_file 后删除了不必要的 vCPU 踢出。此更改适用于正在运行且其中断作为 MSI 直接转发的 vCPU。对于在模拟 WFI 后取消调度的 vCPU,将启用客户机外部中断,从而导致对 vs\_file 的写入导致客户机外部中断,并唤醒 hgei\_interrupt 中的 vCPU 以正确处理中断。该提交由 Andrew Jones 和 Radim Krčmář 审查,并由 Anup Patel 签署。差异显示 arch/riscv/kvm/aia\_imsic.c 中的一个删除。 | |
_2025-04-11_11:28:55_ | 2025-04-11 11:28:55 | RISC-V 密码学指令扩展(K扩展)功能概述 - WuSiYu Blog | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA | Summary: This blog post is about RISC-V's cryptographic extension (K extension) for IT-related experiments. The K extension provides a series of cryptography-related instructions, which are similar to other instructions in terms of using general registers and maintaining the principle of two reads and one write. Compared to software implementation, using these instructions can enhance the speed of cryptographic algorithms and reduce the size of applications. | |
_2025-04-11_10:27:44_ | 2025-04-11 10:27:44 | 一个让 Linus Torvalds _不明觉赞_ 的内核优化与修复历程 - 知乎 | 原文链接失效了?试试备份 | TAGs:操作系统 linux 内存 Folio | Summary: This article discusses a kernel optimization and fix issue related to the Xarray data structure in the Linux kernel. The issue was first noticed as a potential problem in a commit submitted to the Linux community in April 2023, but it was not until September 2023 that it was officially highlighted. The problem was that a certain race condition in the Xarray code could lead to data corruption, but due to the low reproduction probability and lack of effective debug information, it was difficult for the community to determine the root cause. | |
_2025-04-10_15:22:41_ | 2025-04-10 15:22:41 | tech-privileged@lists.riscv.org _ [RISC-V] [tech-crypto-ext] Read the seed CSR | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA zkr | Summary: This discussion revolves around the behavior of the seed CSR (Control and Status Register) in a cryptographic system. The seed CSR is designed to ensure that secret entropy words are not made available multiple times for security reasons. When reading the seed CSR, the system clears (wipes) the entropy contents and changes the state to WAIT, unless there is entropy immediately available for ES16. However, there is a discrepancy between the seed CSR specification and the privileged specification regarding the side effects of reads and writes. | |
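For reference, the polling pattern implied by the ES16/WAIT states can be sketched as follows. This is a hedged example based on the Zkr entropy-source interface (seed CSR at address 0x015, OPST in bits 31:30, entropy in bits 15:0); it assumes M-mode (or appropriately delegated) execution and is not portable user-space code.

```c
#include <stdint.h>

/* The seed CSR (0x015) must be read with a read-write instruction; the write
 * value is ignored, and the access wipes the entropy that is returned. */
#define SEED_OPST_MASK 0xC0000000u
#define SEED_OPST_ES16 0x80000000u   /* 16 valid entropy bits in [15:0] */
#define SEED_OPST_WAIT 0x40000000u   /* no entropy yet, try again later  */

static inline uint32_t read_seed(void)
{
    uint32_t v;
    __asm__ volatile("csrrw %0, 0x015, x0" : "=r"(v));
    return v;
}

/* Poll until 16 fresh entropy bits are delivered. */
static uint16_t get_entropy16(void)
{
    for (;;) {
        uint32_t s = read_seed();
        if ((s & SEED_OPST_MASK) == SEED_OPST_ES16)
            return (uint16_t)(s & 0xFFFF);
        /* BIST/WAIT: no entropy this time; DEAD would need error handling. */
    }
}
```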
_2025-04-09_11:50:40_ | 2025-04-09 11:50:40 | WWW Computer Architecture - Books | 原文链接失效了?试试备份 | TAGs:计算机体系结构 | Summary: This webpage, titled "WWW Computer Architecture Page," is designed by a group of individuals from various universities and is maintained by Derek Hower. The page provides information about computer architecture, including books, publications, and tools. It includes sections on virtual machines, computer architecture design, parallel computing, and computer arithmetic. The page also offers links to newsgroups, job postings, and organizations related to computer architecture.这个标题为“WWW Computer Architecture Page”的网页由来自不同大学的一群人设计,并由 Derek Hower 维护。该页提供有关计算机体系结构的信息,包括书籍、出版物和工具。它包括有关虚拟机、计算机体系结构设计、并行计算和计算机算术的部分。该页面还提供指向与计算机体系结构相关的新闻组、招聘启事和组织的链接。 | |
_2025-04-09_11:38:32_ | 2025-04-09 11:38:32 | SMMU跟TrustZone啥关系? - 极术社区 - 连接开发者与智能计算生态 | 原文链接失效了?试试备份 | TAGs:处理器 安全 | Summary: This text discusses the relationship between SMMU (System Memory Management Unit) and TrustZone in the context of securing access to memory for various masters in a system. TrustZone is a security mechanism that partitions system resources into secure and non-secure parts, and SMMU is a System IP that allows other masters to use memory with a similar structure to the CPU's MMU. By adding SMMU, other masters can have MMU functionality, which includes address translation, memory protection, and isolation. This allows for more secure access to memory and better control over what each master can access. The text also mentions that SMMUv1, SMMUv2, and SMMUv3 have different architectures, programming methods, and hardware implementations but serve similar purposes.本文讨论了 SMMU (System Memory Management Unit) 和 TrustZone 在保护系统中各种主控对内存的访问的上下文中的关系。TrustZone 是一种安全机制,将系统资源划分为安全和不安全部分,而 SMMU 是一个系统 IP,允许其他主控使用结构与 CPU 的 MMU 类似的内存。通过添加 SMMU,其他主控可以具有 MMU 功能,包括地址转换、内存保护和隔离。这允许更安全地访问内存,并更好地控制每个主控可以访问的内容。正文还提到 SMMUv1、SMMUv2 和 SMMUv3 具有不同的体系结构、编程方法和硬件实现,但用途相似。 | |
_2025-04-09_11:38:10_ | 2025-04-09 11:38:10 | TrustZone是如何支持安全中断的? - 极术社区 - 连接开发者与智能计算生态 | 原文链接失效了?试试备份 | TAGs:处理器 安全 | Summary: TrustZone is a system-level security solution that can be implemented in SoC chips using a CPU that supports TrustZone and specific features such as secure address space filtering, secure timers, secure clocks, secure interrupts, key management, secure ROM code, secure debug, and secure SRAM. TrustZone allows easy management of peripherals, including access permissions, master control for secure and non-secure access, and secure interrupt generation and CPU response. This text discusses how to generate secure interrupts in TrustZone, involving secure peripherals, GIC, and the CPU. While GIC is often overlooked, understanding how TrustZone supports secure interrupts is crucial for resolving related issues. The CPU supports secure interrupts by checking if they are masked and determining where to process them. The processing of the interrupt depends on the EL level and exception handler. GIC supports secure interrupts by grouping them and securing related registers, allowing the CPU to configure them only when in a secure state. In GICv3, three groups are used: Group 0 for EL3, Secure Group 1 for S-EL1, and non-secure Group 1 for EL1. The CPU interface determines which interrupt to send based on the EL level and security status. However, FIQ does not represent a secure interrupt and is used differently depending on the group and CPU state. Peripherals can support secure interrupts as SGI, PPI, or SPI, with LPI only supporting non-secure interrupts.TrustZone 是一种系统级安全解决方案,可以使用支持 TrustZone 和特定功能(如安全地址空间过滤、安全定时器、安全时钟、安全中断、密钥管理、安全 ROM 代码、安全调试和安全 SRAM)的 CPU 在 SoC 芯片中实现。TrustZone 允许轻松管理外围设备,包括访问权限、用于安全和非安全访问的主控制以及安全中断生成和 CPU 响应。本文讨论了如何在 TrustZone 中生成安全中断,涉及安全外设、GIC 和 CPU。虽然 GIC 经常被忽视,但了解 TrustZone 如何支持安全中断对于解决相关问题至关重要。CPU 通过检查安全中断是否被屏蔽并确定处理它们的位置来支持安全中断。中断的处理取决于 EL 级别和异常处理程序。GIC 通过对安全中断进行分组并保护相关 registers 来支持安全中断,允许 CPU 仅在处于安全状态时对其进行配置。在 GICv3 中,使用了三个组:组 0 用于 EL3,安全组 1 用于 S-EL1,非安全组 1 用于 EL1。CPU 接口根据 EL 级别和安全状态确定要发送的中断。但是,FIQ 不代表安全中断,并且根据组和 CPU 状态的不同而有不同的使用方式。外设可以支持 SGI、PPI 或 SPI 等安全中断,而 LPI 仅支持非安全中断。 | |
_2025-04-08_18:05:19_ | 2025-04-08 18:05:19 | 我的处理器之路-----更新 - CPU设计杂谈 - EETOP 创芯网论坛 | 原文链接失效了?试试备份 | TAGs:处理器 | Summary: A user named Romer shares their experience of designing a four-core processor after being inspired by a course on computer architecture. They started by deciding on cache sizes, network FIFO sizes, and branch predictor parameters using the gem5 simulator. They chose a directory consistency protocol for the cache coherence protocol and improved upon a protocol from a Carnegie Mellon University course. They then designed each module's specific implementation and began coding. However, they encountered issues when integrating the entire system. Despite pausing the project to focus on school, they were eventually offered a project opportunity and continued their studies in computer architecture. They also learned about the importance of understanding operating systems for processor design.一位名叫 Romer 的用户分享了他们在受到计算机体系结构课程的启发后设计四核处理器的经验。他们首先使用 gem5 模拟器确定缓存大小、网络 FIFO 大小和分支预测器参数。他们为缓存一致性协议选择了目录一致性协议,并在卡内基梅隆大学课程中的协议进行了改进。然后,他们设计了每个模块的特定实现并开始编码。但是,他们在集成整个系统时遇到了问题。尽管暂停了项目以专注于学校,但他们最终获得了一个项目机会并继续学习计算机体系结构。他们还了解了了解作系统对处理器设计的重要性。 | |
|
|
_2025-04-03_17:31:37_ | 2025-04-03 17:31:37 | 环形振荡器与CPU硬件随机数解析- DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:处理器 | Summary: The input discusses the relationship between ring oscillators and CPU hardware random number generation. A ring oscillator is a circuit made up of an odd number of inverters connected in a loop; it oscillates at a frequency determined by the number of stages and their propagation delays. It is simple in structure, but its frequency drifts with environmental factors such as temperature and voltage. CPU hardware random number generators typically rely on physical phenomena for randomness, such as electronic noise or clock jitter, and ring oscillators can serve as the entropy source because the per-stage delay variations (jitter) are unpredictable and can be sampled to produce random bits. The discussion notes that the user may be interested in how ring oscillators are integrated into CPUs as hardware random number generators; possible examples include the entropy source behind Intel's RdRand instruction, although the exact implementations are not public and would need further research to confirm. Hardware random number generators also usually require post-processing (conditioning), such as hashing or encryption, to remove bias and ensure a uniform distribution. The text compares ring-oscillator-based designs with other techniques, such as generators based on avalanche diodes or quantum effects: ring oscillators are a low-cost, easily integrated randomness source but may need more calibration and post-processing, and they must still meet security requirements such as unpredictability and resistance to attack. The intended audience is computer science or electronics engineering students and developers interested in hardware security, and the explanation proceeds from ring oscillator principles to hardware random number generators and then to their trade-offs and applications, with CPU examples such as Intel or AMD processors. Ring oscillators and CPU hardware random number generators are closely related concepts in computer hardware design. The following is a detailed analysis of both concepts and their relationship. | |
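On the CPU side, the consumer-visible interface is simple even though the entropy source behind it is not. The sketch below uses the x86 RDRAND intrinsic as an example (build with `-mrdrnd`; real code should first check CPUID for RDRAND support); it is illustrative and says nothing about how the underlying entropy is produced.

```c
#include <stdio.h>
#include <immintrin.h>

/* Minimal sketch: RDRAND returns output of an on-chip DRBG that is reseeded
 * from a hardware entropy source. The intrinsic returns 1 on success and 0
 * when no random value was available, in which case the caller retries. */
int main(void)
{
    unsigned long long r;
    for (int tries = 0; tries < 10; tries++) {
        if (_rdrand64_step(&r)) {
            printf("hardware random value: 0x%016llx\n", r);
            return 0;
        }
    }
    fprintf(stderr, "RDRAND kept returning no data\n");
    return 1;
}
```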
_2025-04-03_16:56:54_ | 2025-04-03 16:56:54 | 从gem5到ASIP,如何打造一款自己的交换芯片模拟器? | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 处理器仿真 | Summary: This article discusses the development of a custom chip simulator for network switches using FPGA technology. The author, who is involved in network switch FPGA development and sharing, explains the importance of having a chip simulator before starting the actual code design. The article discusses the challenges in designing a chip simulator, particularly in modeling the parallel behavior of hardware with inherently serial software execution. The author also mentions the importance of accurately simulating chip behavior while maintaining efficiency. The article mentions the Gem5 simulator as an example of a successful chip simulator and discusses its modular and discrete event-driven architecture. The author also mentions the importance of understanding events and event-driven systems in chip simulation. The article concludes by discussing the benefits of using an event-driven chip simulator and the potential for creating a custom chip simulator for network switches using FPGAs.本文讨论了使用 FPGA 技术为网络交换机开发定制芯片仿真器。参与网络交换机 FPGA 开发和共享的作者解释了在开始实际代码设计之前拥有芯片仿真器的重要性。本文讨论了设计芯片仿真器的挑战,特别是在处理并行芯片行为的软件语言的串行执行方面。作者还提到了在保持效率的同时准确仿真芯片行为的重要性。本文将 Gem5 仿真器作为一个成功的芯片仿真器示例,并讨论了其模块化和离散的事件驱动架构。作者还提到了在芯片仿真中理解事件和事件驱动系统的重要性。本文最后讨论了使用事件驱动芯片仿真器的好处,以及使用 FPGA 为网络交换机创建自定义芯片仿真器的潜力。 | |
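The "discrete event-driven" idea the article attributes to gem5 can be captured in a few dozen lines. The following toy scheduler is a generic sketch (not gem5 code): events carry a timestamp and a callback, the scheduler always runs the earliest pending event, and handlers reschedule themselves to model periodic hardware activity.

```c
#include <stdio.h>
#include <stdint.h>

#define MAX_EVENTS 64

typedef void (*handler_fn)(uint64_t now, void *arg);

struct event { uint64_t when; handler_fn fn; void *arg; };

static struct event queue[MAX_EVENTS];
static int nevents;

static void schedule(uint64_t when, handler_fn fn, void *arg)
{
    if (nevents < MAX_EVENTS)
        queue[nevents++] = (struct event){ when, fn, arg };
}

/* Example hardware model: a clock that ticks every 10 simulated cycles. */
static void tick(uint64_t now, void *arg)
{
    int *remaining = arg;
    printf("cycle %llu: clock tick\n", (unsigned long long)now);
    if (--*remaining > 0)
        schedule(now + 10, tick, remaining);   /* re-arm the periodic event */
}

int main(void)
{
    int remaining = 3;
    schedule(0, tick, &remaining);

    /* Event loop: pop the earliest event, jump simulated time to it, run it.
     * "Parallel" hardware activity is serialized onto this single queue. */
    while (nevents > 0) {
        int best = 0;                          /* O(n) earliest-event search */
        for (int i = 1; i < nevents; i++)
            if (queue[i].when < queue[best].when) best = i;
        struct event e = queue[best];
        queue[best] = queue[--nevents];        /* remove it from the queue */
        e.fn(e.when, e.arg);
    }
    return 0;
}
```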
_2025-04-03_14:38:50_ | 2025-04-03 14:38:50 | Other Installation Methods - Rust Forge | 原文链接失效了?试试备份 | TAGs:语言 rust Install | Summary: The text describes different methods for installing Rust, with the recommended way being through rustup, a tool that manages Rust toolchains consistently across platforms. However, there are other ways to install Rust, such as offline installation, using a system package manager, or downloading standalone installers. The text also mentions the availability of source code for those who want to build the Rust toolchain from scratch.本文描述了安装 Rust 的不同方法,推荐的方法是通过 rustup,这是一个跨平台一致地管理 Rust 工具链的工具。但是,还有其他方法可以安装 Rust,例如离线安装、使用系统包管理器或下载独立安装程序。文本还提到了源代码的可用性,供那些想要从头开始构建 Rust 工具链的人使用。 | |
_2025-04-03_10:20:36_ | 2025-04-03 10:20:36 | 中国科学院软件研究所团队推动 Cloud Hypervisor 官方支持 RISC-V | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 | Summary: The Chinese Academy of Sciences Software Research Institute team has officially released Cloud-Hypervisor v45.0, which adds experimental RISC-V support. This makes Cloud-Hypervisor the first lightweight virtualization solution to integrate with Kata-containers and fully support RISC-V. The update received attention in the overseas open-source community, with Phoronix reporting on its significance as the first step for RISC-V in the server virtualization domain. Cloud-Hypervisor, built using the Rust programming language, aims to create a complete Rust virtualization software ecosystem for future RISC-V chips. As the bridge connecting the KVM virtualization engine with upper-layer applications, Cloud-Hypervisor is a crucial implementation in the RISC-V virtualization software landscape. It provides a runtime environment for Kubernetes and other container orchestration systems with virtual machine-level isolation, enhancing security while implementing flat fault/performance isolation within clusters. Cloud-Hypervisor, a modern, lightweight, and cross-platform virtualization monitoring program, has been developed over five years and has contributed significantly to the RISC-V community, ranking ninth in total contributions and first in RISC-V contributions globally. To achieve Cloud Hypervisor's RISC-V support, the team focused on three core areas: 1) virtualization core capabilities, 2) engineering system upgrades, and 3) production-level stability assurance. These efforts have led to the initial support of hypervisor, arch, vm-allocator, devices, and vmm modules on the RISC-V architecture. The team plans to further enhance Cloud-Hypervisor's RISC-V architecture support by addressing the feature differences between RISC-V, x86, and ARM, completing FDT generation links, adding UEFI boot support, and supporting PMU, IOMMU, and TPM devices. OpenEuler, as the first verification platform for RISC-V virtualization capabilities, will continue to support Cloud-Hypervisor on the openEuler platform and integrate it with the Kata Containers secure container technology path, creating a secure container infrastructure based on openEuler RISC-V. The release of Cloud-Hypervisor v45.0 marks a significant milestone in the RISC-V virtualization roadmap. By implementing systemic breakthroughs in instruction register operations, AIA interrupt controller integration, and memory management, the team is building a virtualization ability matrix that conforms to the RVA23 specification. 
This achievement not only provides a verified RISC-V virtualization implementation baseline for the open-source community but also lays the foundation for the standardized evolution of future RISC-V virtualization-related software, enabling RISC-V server ecosystems to possess software validation capabilities from chip features to container runtimes even before the hardware platform matures.中国科学院软件研究院团队正式发布 Cloud-Hypervisor v45.0,增加了实验性的 RISC-V 支持。这使得 Cloud-Hypervisor 成为第一个与 Kata 容器集成并完全支持 RISC-V 的轻量级虚拟化解决方案。该更新受到了海外开源社区的关注,Phoronix 报告了其作为 RISC-V 在服务器虚拟化领域的第一步的重要性。Cloud-Hypervisor 使用 Rust 编程语言构建,旨在为未来的 RISC-V 芯片创建一个完整的 Rust 虚拟化软件生态系统。作为连接 KVM 虚拟化引擎与上层应用程序的桥梁,Cloud-Hypervisor 是 RISC-V 虚拟化软件领域中的关键实现。它为 Kubernetes 和其他容器编排系统提供了一个具有虚拟机级隔离的运行时环境,在增强安全性的同时在集群内实施平面故障/性能隔离。Cloud-Hypervisor 是一个现代、轻量级和跨平台的虚拟化监控程序,已经开发了五年多,为 RISC-V 社区做出了重大贡献,在全球总贡献中排名第九,在 RISC-V 贡献中排名第一。为了实现 Cloud Hypervisor 的 RISC-V 支持,该团队专注于三个核心领域:1) 虚拟化核心能力,2) 工程系统升级,以及 3) 生产级稳定性保证。这些努力导致了对 RISC-V 架构上的 hypervisor、arch、vm-allocator、devices 和 vmm 模块的初步支持。 该团队计划通过解决 RISC-V、x86 和 ARM 之间的功能差异,完成 FDT 生成链接,添加 UEFI 启动支持,并支持 PMU、IOMMU 和 TPM 设备,进一步增强 Cloud-Hypervisor 的 RISC-V 架构支持。OpenEuler 作为首个 RISC-V 虚拟化能力的验证平台,将继续在 openEuler 平台上支持 Cloud-Hypervisor,并与 Kata Containers 安全容器技术路径集成,打造基于 openEuler RISC-V 的安全容器基础设施。Cloud-Hypervisor v45.0 的发布标志着 RISC-V 虚拟化路线图中的一个重要里程碑。通过在指令寄存器作、AIA 中断控制器集成和内存管理方面实现系统性突破,该团队正在构建符合 RVA23 规范的虚拟化能力矩阵。这一成果不仅为开源社区提供了经过验证的 RISC-V 虚拟化实现基线,也为未来 RISC-V 虚拟化相关软件的标准化演进奠定了基础,使 RISC-V 服务器生态系统在硬件平台成熟之前就拥有从芯片特性到容器运行时的软件验证能力。 | |
|
_2025-04-02_11:47:19_ | 2025-04-02 11:47:19 | Semidynamics - About us | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 企业 | Summary: Semidynamics is a European RISC-V IP core provider based in Barcelona, specializing in high-bandwidth, high-performance, vector unit IP cores for machine learning and AI applications. The company, founded in 2016, has a team of experienced hardware and software engineers, and offers services from specification to design and validation. They also provide employee development services, both on-site and remotely. The team includes executives Roger, Pedro, Silvana, Bruno, Laura, Marc, Volker, Clara, Federico, Deepak, Usman, Todor, Jordi, Shyamkumar, Karel, Francesco, Chetan, Kevin, Muhammad, Joan, Ismael, Arnau, Stefano, Aitor, Àlex, Ian, Shreeharsha, Hector, Florencia, Martí, Branimir, Jaume, Zeeshan, José, Pia, Enric, and many others. Semidynamics is a member of RISC-V International, a global non-profit organization, and collaborates with various institutions and grants, including the European Processor Initiative (EPI) and MontBlanc 2020 project. They are also developing a RISC-V cloud server architecture, called RISER, and a cloud service, Vitamin-V, based on open-source RISC-V technology. Semidynamics offers customized high-bandwidth RISC-V IP cores for your next project. Contact them for more information.Semidynamics 是一家总部位于巴塞罗那的欧洲 RISC-V IP 核提供商,专门为机器学习和 AI 应用提供高带宽、高性能的矢量单元 IP 核。该公司成立于 2016 年,拥有一支经验丰富的硬件和软件工程师团队,提供从规范到设计和验证的服务。他们还提供现场和远程员工发展服务。该团队包括高管 Roger、Pedro、Silvana、Bruno、Laura、Marc、Volker、Clara、Federico、Deepak、Usman、Todor、Jordi、Shyamkumar、Karel、Francesco、Chetan、Kevin、Muhammad、Joan、Ismael、Arnau、Stefano、Aitor、Àlex、Ian、Shreeharsha、Hector、Florencia、Martí、Branimir、Jaume、Zeeshan、José、Pia、Enric 等。Semidynamics 是全球非营利组织 RISC-V International 的成员,并与各种机构和赠款合作,包括欧洲处理器倡议 (EPI) 和 MontBlanc 2020 项目。他们还在开发一种名为 RISER 的 RISC-V 云服务器架构,以及一种基于开源 RISC-V 技术的云服务 Vitamin-V。Semidynamics 为您的下一个项目提供定制的高带宽 RISC-V IP 内核。请联系他们以获取更多信息。 | |
|
_2025-04-01_10:39:15_ | 2025-04-01 10:39:15 | 开发内功修炼@张彦飞 - 分享我的技术日常思考,和大伙儿一起共同成长! | 原文链接失效了?试试备份 | TAGs:博客_论坛 个人 | Summary: This page is from the blog "开发内功修炼" by 张彦飞, discussing various topics related to Linux and memory management. In one post, the author expresses concern about the Linux kernel using less memory than expected and wonders how it detects available physical memory addresses. In another post, the author explains the benefits of using HugePages for Oracle databases. The author also shares his experiences on time and energy management while writing a book. The page includes tags for "内存篇" (memory), "服务器" (server), and "技术面试" (technical interview), among others.此页面来自 张彦飞 的博客 “开发内功修炼”,讨论了与 Linux 和内存管理相关的各种主题。在一篇文章中,作者对 Linux 内核使用的内存少于预期表示担忧,并想知道它如何检测可用的物理内存地址。在另一篇文章中,作者解释了将 HugePages 用于 Oracle 数据库的好处。作者还分享了他在写书时在时间和能源管理方面的经验。该页面包括 “内存篇” (memory)、“服务器” (server) 和 “技术面试” (technical interview) 等标签。 | |
_2025-03-31_21:55:14_ | 2025-03-31 21:55:14 | 虚拟机中的 HLT 优化_ kvm-poll-control - 知乎 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 kvm cpu idle | Summary: The Linux kernel function `wait_for_random_bytes` is used to ensure that the random number generator (RNG) in the Linux kernel is ready and has sufficient entropy before continuing execution in situations where secure random numbers are required, such as encryption operations or key generation. The function blocks the current thread until the RNG has completed initialization and has an adequate amount of entropy. It is particularly important during system startup or when the entropy pool has not yet accumulated sufficient random data. By using `wait_for_random_bytes`, developers can prevent the generation of weak random numbers, which could lead to security vulnerabilities. The function works by checking if the RNG is ready using the `crng_ready()` function. If the RNG is not yet initialized, the function blocks the current thread and adds it to the `crng_init_wait` queue, waiting until the RNG is ready to continue execution. The function can be used in various contexts, such as in driver initialization or in generating secure tokens. It is essential to note that the function should only be used in contexts where sleeping is allowed, as it may call functions that cause the system to sleep, such as `wait_event`. Additionally, the function can have performance implications, especially during system startup when the entropy pool may take a long time to initialize. Alternative solutions include using non-blocking functions or actively adding entropy sources to the pool. Related functions include `get_random_bytes`, `add_hwgenerator_randomness`, and `urandom_read`.Linux 内核函数 'wait_for_random_bytes' 用于确保 Linux 内核中的随机数生成器 (RNG) 已准备就绪并具有足够的熵,然后才能在需要安全随机数的情况下继续执行,例如加密作或密钥生成。该函数会阻止当前线程,直到 RNG 完成初始化并具有足够的熵量。在系统启动期间或熵池尚未积累足够的随机数据时,这一点尤为重要。通过使用 'wait_for_random_bytes',开发人员可以防止产生弱随机数,这可能会导致安全漏洞。该函数通过使用 'crng_ready()' 函数检查 RNG 是否准备就绪来工作。如果 RNG 尚未初始化,该函数会阻止当前线程并将其添加到 'crng_init_wait' 队列中,等待 RNG 准备好继续执行。该函数可用于各种上下文,例如驱动程序初始化或生成安全令牌。需要注意的是,该函数只应在允许休眠的上下文中使用,因为它可能会调用导致系统休眠的函数,例如 'wait_event'。此外,该函数可能会对性能产生影响,尤其是在系统启动期间,此时熵池可能需要很长时间才能初始化。替代解决方案包括使用非阻塞函数或主动向池中添加熵源。相关函数包括 'get_random_bytes'、'add_hwgenerator_randomness' 和 'urandom_read'。 | |
_2025-03-31_14:35:51_ | 2025-03-31 14:35:51 | 虚拟机中的 HLT 优化_ kvm-poll-control - 知乎 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 kvm cpu idle | Summary: This text is about the implementation and functionality of KVM's halt poll control feature, which is a paravirtualization (PV) technique used in the Linux kernel to optimize virtualization by transferring the polling (round-robin checking) phase of HLT (halt processor) instructions from the host to the guest. This can help reduce some HLT-induced VM-exits. The text also discusses the background and working mechanism of this feature by referring to the Linux kernel code. | |
_2025-03-30_21:47:41_ | 2025-03-30 21:47:41 | Linux内核函数wait_for_random_bytes详解 - DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:操作系统 linux random | Summary: The Linux kernel function `wait_for_random_bytes` is used to ensure that the random number generator (RNG) in the Linux kernel is ready and has sufficient entropy before continuing execution in situations where secure random numbers are required, such as encryption operations or key generation. The function blocks the current thread until the RNG has completed initialization and has an adequate amount of entropy. It is particularly important during system startup or when the entropy pool has not yet accumulated sufficient random data. By using `wait_for_random_bytes`, developers can prevent the generation of weak random numbers, which could lead to security vulnerabilities. The function works by checking if the RNG is ready using the `crng_ready()` function. If the RNG is not yet initialized, the function blocks the current thread and adds it to the `crng_init_wait` queue, waiting until the RNG is ready to continue execution. The function can be used in various contexts, such as in driver initialization or in generating secure tokens. It is essential to note that the function should only be used in contexts where sleeping is allowed, as it may call functions that cause the system to sleep, such as `wait_event`. Additionally, the function can have performance implications, especially during system startup when the entropy pool may take a long time to initialize. Alternative solutions include using non-blocking functions or actively adding entropy sources to the pool. Related functions include `get_random_bytes`, `add_hwgenerator_randomness`, and `urandom_read`.Linux 内核函数 'wait_for_random_bytes' 用于确保 Linux 内核中的随机数生成器 (RNG) 已准备就绪并具有足够的熵,然后才能在需要安全随机数的情况下继续执行,例如加密作或密钥生成。该函数会阻止当前线程,直到 RNG 完成初始化并具有足够的熵量。在系统启动期间或熵池尚未积累足够的随机数据时,这一点尤为重要。通过使用 'wait_for_random_bytes',开发人员可以防止产生弱随机数,这可能会导致安全漏洞。该函数通过使用 'crng_ready()' 函数检查 RNG 是否准备就绪来工作。如果 RNG 尚未初始化,该函数会阻止当前线程并将其添加到 'crng_init_wait' 队列中,等待 RNG 准备好继续执行。该函数可用于各种上下文,例如驱动程序初始化或生成安全令牌。需要注意的是,该函数只应在允许休眠的上下文中使用,因为它可能会调用导致系统休眠的函数,例如 'wait_event'。此外,该函数可能会对性能产生影响,尤其是在系统启动期间,此时熵池可能需要很长时间才能初始化。替代解决方案包括使用非阻塞函数或主动向池中添加熵源。相关函数包括 'get_random_bytes'、'add_hwgenerator_randomness' 和 'urandom_read'。 | |
_2025-03-30_14:11:55_ | 2025-03-30 14:11:55 | RISC-V SSTC扩展对Linux CRNG影响分析 - DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v timer SSTC random | Summary: The user is asking for an analysis of the impact of RISC-V's SSTC extension on Linux's CRNG (Cryptographic Random Number Generator) and the getrandom system call. The SSTC extension may refer to "Supervisor Software Time Compare" or some time-related extension, but it's not clear without checking the search results. However, the user's question seems to involve RISC-V's security vulnerabilities or random number generation mechanisms. For instance, one page mentions that RISC-V enabled the general CPU vulnerability reporting feature in Linux 6.12 kernel, including the detection and mitigation status. This might be related to system security, which in turn affects the reliability of the random number generator. CRNG is a random number generation module in the Linux kernel, while getrandom is a system call for obtaining random numbers in user space. If RISC-V's SSTC extension introduces new instructions or hardware features, it may affect the efficiency or security of random number generation. For example, one page discusses RISC-V Linux entry address memory optimization, which might involve kernel startup time memory management but is not directly related to random number generation. Another page mentions RISC-V processor vulnerabilities, such as GhostWrite, which can impact system security and, consequently, CRNG's data source. If SSTC extension improves time stamping or counters, it may provide higher-quality interrupt sources or entropy sources for CRNG, thereby improving random number generation speed and quality. One page discusses RISC-V's development in the software ecosystem, including toolchain and compiler optimization, which may indirectly affect the implementation of random number generation in the kernel. For instance, new extensions may require kernel module adaptation and optimization of the entropy collection process. Additionally, pages 7 and 8 discuss RISC-V's integration with the open-source community, which might involve kernel developers supporting RISC-V's new features. If SSTC extension requires kernel-level modifications, such as in driver or architecture-related code, this may impact CRNG's implementation. In summary, while the search results do not directly mention SSTC extension, it can be inferred that SSTC may be a security or time-related extension, and its impact may include improving entropy source collection efficiency, enhancing random number generation security, or requiring kernel adaptation. For example, if SSTC provides hardware random number generation instructions, the kernel's CRNG may utilize these instructions to improve performance and reduce reliance on software entropy sources. Furthermore, vulnerability patches, such as the one mentioned in page 2, can strengthen CRNG's security by addressing hardware vulnerabilities that could lead to random number prediction risks. RISC-V's SSTC (Supervisor Software Time Compare) extension is a part of RISC-V's privileged architecture, primarily used for optimizing time management and interrupt handling. Its impact on Linux's CRNG and getrandom can be summarized as follows: | |
_2025-03-30_13:46:50_ | 2025-03-30 13:46:50 | Linux系统启动时的getrandom随机数系统调用阻塞问题 | 原文链接失效了?试试备份 | TAGs:操作系统 linux random | Summary: This text discusses an issue with the getrandom system call in the Linux kernel, which can cause a delay during system startup due to the entropy pool not being fully initialized yet. The entropy pool is used to generate high-quality random numbers for various purposes, but it needs to be initialized before it can provide these numbers to applications. If the pool is not initialized, the getrandom() system call without the GRND\_NONBLOCK flag will block until the initialization is complete. This can cause applications that depend on the kernel's random number generator to also experience a delay during system startup. The text suggests enabling the CONFIG\_RANDOM\_TRUST\_CPU option during kernel compilation to trust the CPU's random number generator to seed the kernel's CRNG and avoid this delay. However, this option is not enabled by default on some distributions. The text also mentions various hardware and software sources of entropy and how they are used by the Linux kernel.本文讨论了 Linux 内核中 getrandom 系统调用的一个问题,由于熵池尚未完全初始化,这可能会导致系统启动期间出现延迟。熵池用于生成用于各种目的的高质量随机数,但需要先初始化,然后才能将这些数字提供给应用程序。如果池未初始化,则没有 GRND\_NONBLOCK 标志的 getrandom() 系统调用将阻塞,直到初始化完成。这可能会导致依赖内核随机数生成器的应用程序在系统启动期间也遇到延迟。文本建议在内核编译期间启用 CONFIG\_RANDOM\_TRUST\_CPU 选项,以信任 CPU 的随机数生成器为内核的 CRNG 提供种子并避免这种延迟。但是,默认情况下,某些分配上不启用此选项。该文本还提到了各种硬件和软件熵源,以及 Linux 内核如何使用它们。 | |
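A user-space caller can observe exactly the behaviour described here. The small program below is a sketch using the glibc `getrandom()` wrapper (glibc 2.25 or newer): with GRND_NONBLOCK it fails with EAGAIN instead of blocking when the CRNG is not yet initialized.

```c
#include <errno.h>
#include <stdio.h>
#include <sys/random.h>

/* Sketch of the boot-time behaviour discussed above: without GRND_NONBLOCK
 * the call may block until the kernel CRNG is initialized; with the flag it
 * fails fast with EAGAIN, letting the caller decide what to do. */
int main(void)
{
    unsigned char key[32];
    ssize_t n = getrandom(key, sizeof(key), GRND_NONBLOCK);
    if (n == (ssize_t)sizeof(key)) {
        puts("got 32 bytes without blocking (CRNG already initialized)");
    } else if (n < 0 && errno == EAGAIN) {
        puts("CRNG not ready yet; a plain getrandom() here would block");
    } else {
        perror("getrandom");
    }
    return 0;
}
```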
_2025-03-30_13:19:19_ | 2025-03-30 13:19:19 | Linux内核函数wait_for_random_bytes详解 - DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:操作系统 linux random | Summary: The `wait_for_random_bytes` function in the Linux kernel is used to ensure that the random number generator (RNG) is ready and has sufficient entropy before generating secure random numbers. The function blocks the caller until the RNG is initialized and has enough entropy. | |
_2025-03-29_23:58:36_ | 2025-03-29 23:58:36 | Linux内核函数wait_for_random_bytes详解 - DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:操作系统 linux random | Summary: The Linux kernel function `wait_for_random_bytes` is used to ensure that the random number generator (RNG) in the Linux kernel is ready and has sufficient entropy before continuing execution in situations where secure random numbers are required, such as encryption operations or key generation. The function blocks the current thread until the RNG has completed initialization and has an adequate amount of entropy. It is particularly important during system startup or when the entropy pool has not yet accumulated sufficient random data. By using `wait_for_random_bytes`, developers can prevent the generation of weak random numbers, which could lead to security vulnerabilities. The function works by checking if the RNG is ready using the `crng_ready()` function. If the RNG is not yet initialized, the function blocks the current thread and adds it to the `crng_init_wait` queue, waiting until the RNG is ready to continue execution. The function can be used in various contexts, such as in driver initialization or in generating secure tokens. It is essential to note that the function should only be used in contexts where sleeping is allowed, as it may call functions that cause the system to sleep, such as `wait_event`. Additionally, the function can have performance implications, especially during system startup when the entropy pool may take a long time to initialize. Alternative solutions include using non-blocking functions or actively adding entropy sources to the pool. Related functions include `get_random_bytes`, `add_hwgenerator_randomness`, and `urandom_read`.Linux 内核函数 'wait_for_random_bytes' 用于确保 Linux 内核中的随机数生成器 (RNG) 已准备就绪并具有足够的熵,然后才能在需要安全随机数的情况下继续执行,例如加密作或密钥生成。该函数会阻止当前线程,直到 RNG 完成初始化并具有足够的熵量。在系统启动期间或熵池尚未积累足够的随机数据时,这一点尤为重要。通过使用 'wait_for_random_bytes',开发人员可以防止产生弱随机数,这可能会导致安全漏洞。该函数通过使用 'crng_ready()' 函数检查 RNG 是否准备就绪来工作。如果 RNG 尚未初始化,该函数会阻止当前线程并将其添加到 'crng_init_wait' 队列中,等待 RNG 准备好继续执行。该函数可用于各种上下文,例如驱动程序初始化或生成安全令牌。需要注意的是,该函数只应在允许休眠的上下文中使用,因为它可能会调用导致系统休眠的函数,例如 'wait_event'。此外,该函数可能会对性能产生影响,尤其是在系统启动期间,此时熵池可能需要很长时间才能初始化。替代解决方案包括使用非阻塞函数或主动向池中添加熵源。相关函数包括 'get_random_bytes'、'add_hwgenerator_randomness' 和 'urandom_read'。 | |
|
_2025-03-28_17:30:12_ | 2025-03-28 17:30:12 | Linux跟踪进程系统调用方法 - DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:操作系统 linux 系统调用 | Summary: In Linux, tracking system calls for processes is commonly done using the tool strace. Strace is a widely-used utility for monitoring and recording the interactions between a process and the Linux kernel. It provides detailed information about system calls, signals, and other process activities.在 Linux 中,跟踪系统对进程的调用通常使用工具 strace 来完成。Strace 是一种广泛使用的实用程序,用于监控和记录进程与 Linux 内核之间的交互。它提供有关系统调用、信号和其他进程活动的详细信息。 | |
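Under the hood, strace is built on ptrace. The following is a stripped-down, x86-64-only sketch of that mechanism (no attach, no argument decoding, and each syscall is reported at both entry and exit); it is meant to show the idea, not replace strace.

```c
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

/* Bare-bones syscall tracer: the child asks to be traced and exec()s, the
 * parent then lets it run to each syscall boundary and prints the syscall
 * number taken from orig_rax (x86-64 specific). */
int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execlp("ls", "ls", (char *)NULL);      /* the traced program */
        _exit(127);
    }

    int status;
    waitpid(child, &status, 0);                /* initial stop at exec */
    while (!WIFEXITED(status)) {
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);   /* run to next syscall stop */
        waitpid(child, &status, 0);
        if (WIFSTOPPED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
        }
    }
    return 0;
}
```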
_2025-03-28_10:29:49_ | 2025-03-28 10:29:49 | Google AI芯片TPU核心架构--脉动阵列Systolic Array - 知乎 | 原文链接失效了?试试备份 | TAGs:处理器 AI | Summary: Google's TPU (Tensor Processing Unit) is a specialized chip designed for artificial intelligence applications, with its core component being the "Matrix Multiply Unit." This unit uses a systolic array (脉动阵列) to improve the speed and power efficiency of AI workloads, particularly convolution and matrix multiplication. The article focuses on the systolic array around which this matrix multiplication unit is built; it works together with other on-chip structures such as the Unified Buffer and the Weight FIFO, as well as the activation and pooling calculation units. | |
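The wavefront timing of such an array can be modelled in a few lines. The sketch below is a conceptual simulation, not TPU code: with the usual input skew, PE(i,j) consumes A[i][k] and B[k][j] at cycle t = i + j + k, so each processing element performs at most one multiply-accumulate per cycle and an NxN product completes after roughly 3N cycles.

```c
#include <stdio.h>

/* Conceptual model of an output-stationary systolic array: PE(i,j) holds
 * C[i][j], operands arrive skewed so that the pair (A[i][k], B[k][j])
 * reaches PE(i,j) at cycle t = i + j + k. */
#define N 3

int main(void)
{
    int A[N][N] = {{1,2,3},{4,5,6},{7,8,9}};
    int B[N][N] = {{9,8,7},{6,5,4},{3,2,1}};
    int C[N][N] = {0};

    for (int t = 0; t <= 3 * (N - 1); t++)        /* simulated cycles */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                int k = t - i - j;                /* operand pair arriving now */
                if (k >= 0 && k < N)
                    C[i][j] += A[i][k] * B[k][j]; /* one MAC per PE per cycle */
            }

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%4d", C[i][j]);
        putchar('\n');
    }
    return 0;
}
```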
|
_2025-03-27_19:33:29_ | 2025-03-27 19:33:29 | 从Linux内核中获取真随机数 - 红字 - 博客园 | 原文链接失效了?试试备份 | TAGs:操作系统 linux | Summary: The Linux kernel has an implementation for a random number generator, which theoretically produces true random numbers. Unlike the pseudo-random numbers generated by the standard C library functions rand() and srand(), true random numbers are not predictable and cannot be replicated using the same seed value. However, time() returns a deterministic value, making it insufficient for generating unpredictable noise. For programs requiring true random numbers, an external noise source is necessary. Linux kernel found a perfect noise source generator: human interaction with the computer. Keyboard keystroke intervals, mouse movement distances and intervals, and specific interrupt times are all unpredictable for the computer. Although a computer's behavior is entirely programmable, human interaction with hardware introduces significant uncertainty. The kernel maintains a entropy pool, which contains completely random data from these unpredictable device events. When a new device event arrives, the kernel estimates the randomness of the new data and reduces the entropy pool's estimation when data is taken out.Linux 内核有一个随机数生成器的实现,理论上它会生成真正的随机数。与标准 C 库函数 rand() 和 srand() 生成的伪随机数不同,真随机数是不可预测的,并且不能使用相同的种子值进行复制。但是, time() 返回一个确定性值,使其不足以生成不可预测的噪声。对于需要真随机数的程序,外部噪声源是必需的。Linux 内核找到了一个完美的噪声源生成器:人类与计算机的交互。键盘击键间隔、鼠标移动距离和间隔以及特定的中断时间对于计算机来说都是不可预测的。尽管计算机的行为完全可编程,但人类与硬件的交互会带来很大的不确定性。内核维护一个熵池,其中包含来自这些不可预测的设备事件的完全随机数据。当新的设备事件到达时,内核会估计新数据的随机性,并在取出数据时减少熵池的估计。 | |
_2025-03-27_17:49:34_ | 2025-03-27 17:49:34 | [RESEND,v2,4_4] iommu_riscv_ Add support for Svnapot - Patchwork | 原文链接失效了?试试备份 | TAGs:处理器 risc-v IOMMU svnapot | Summary: This text is an email message containing a patch series for the RISC-V IOMMU driver in the Linux kernel. The patch series aims to add support for Svnapot, a specific page size, in the IOMMU driver. The patch series includes five individual patches, each addressing a specific aspect of the implementation. The first patch adds the Svnapot size as a supported page size and applies it when possible. The remaining patches handle the allocation and freeing of Svnapot-sized pages in the IOMMU driver. The email also includes various headers and metadata, such as the sender, recipients, subject, and date.此文本是一封电子邮件,其中包含 Linux 内核中 RISC-V IOMMU 驱动程序的补丁系列。该补丁系列旨在在 IOMMU 驱动程序中添加对 Svnapot(一种特定页面大小)的支持。补丁系列包括五个单独的补丁,每个补丁都涉及实施的特定方面。第一个补丁将 Svnapot 大小添加为支持的页面大小,并在可能的情况下应用它。其余补丁处理 IOMMU 驱动程序中 Svnapot 大小的页面的分配和释放。该电子邮件还包括各种标头和元数据,例如发件人、收件人、主题和日期。 | |
_2025-03-27_16:14:26_ | 2025-03-27 16:14:26 | KVM中断注入机制-CSDN博客 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 中断虚拟化 | Summary: This text is about the process of handling interrupts in the X86 platform's kernel on QEMU. It explains the three main parts of interrupt handling: routing the interrupt, delivering it to the Local APIC (LAPIC), and registering it in the virtual CPU (vCPU). The text also discusses the software simulation implementation and the interrupt flow.本文是关于 QEMU 上 X86 平台内核中处理中断的过程。它解释了中断处理的三个主要部分:路由中断、将其传送到本地 APIC (LAPIC) 以及在虚拟 CPU (vCPU) 中注册中断。本文还讨论了软件仿真实现和中断流。 | |
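From userspace, the injection path described here starts with a handful of KVM ioctls: with an in-kernel irqchip, the VMM asserts a GSI with KVM_IRQ_LINE and KVM then routes it through its (IO)APIC model toward the target vCPU's LAPIC. A bare-bones sketch of that call sequence, assuming an x86 host with an in-kernel irqchip; error handling is trimmed and the GSI number is arbitrary.

```c
/* Assert and de-assert a guest interrupt line through KVM's in-kernel
 * irqchip (x86 assumed).  A real VMM such as QEMU does this after routing. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);
    ioctl(vm, KVM_CREATE_IRQCHIP, 0);                      /* in-kernel PIC/IOAPIC */

    struct kvm_irq_level irq = { .irq = 10, .level = 1 };  /* GSI 10 is only an example */
    if (ioctl(vm, KVM_IRQ_LINE, &irq) < 0)                 /* raise the line */
        perror("KVM_IRQ_LINE");
    irq.level = 0;
    ioctl(vm, KVM_IRQ_LINE, &irq);                         /* lower it again */

    close(vm);
    close(kvm);
    return 0;
}
```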
|
_2025-03-27_11:31:09_ | 2025-03-27 11:31:09 | riscv_ KVM_ Remove unnecessary vcpu kick - kernel_git_riscv_linux.git - RISC-V Linux kernel tree | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 中断 | Summary: The RISC-V Linux kernel tree had a commit on February 21, 2025, by Bill Xiang, which removed an unnecessary vCPU kick after writing to the vs_file in kvm_riscv_vcpu_aia_imsic_inject. This change applies to vCPUs that are running and have their interrupts forwarded directly as an MSI. For vCPUs that are descheduled after emulating WFI, the guest external interrupt is enabled, so the write to vs_file itself raises a guest external interrupt and wakes the vCPU in hgei_interrupt to handle it properly. The commit was reviewed by Andrew Jones and Radim Krčmář and signed off by Anup Patel. The diff shows one deletion in arch/riscv/kvm/aia_imsic.c.RISC-V Linux 内核树于 2025 年 2 月 21 日由 Bill Xiang 提交,该提交在写入 kvm_riscv_vcpu_aia_imsic_inject 中的 vs_file 后删除了不必要的 vCPU kick。此更改适用于正在运行且其中断作为 MSI 直接转发的 vCPU。对于在模拟 WFI 后取消调度的 vCPU,客户机外部中断已被启用,因此对 vs_file 的写入本身就会触发客户机外部中断,并唤醒 hgei_interrupt 中的 vCPU 以正确处理中断。该提交由 Andrew Jones 和 Radim Krčmář 审查,并由 Anup Patel 签署。差异显示 arch/riscv/kvm/aia_imsic.c 中删除了一行。 | |
|
_2025-03-25_14:43:07_ | 2025-03-25 14:43:07 | riscv-profiles_src_rvb23-profile.adoc at main · riscv_riscv-profiles | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA | Summary: The provided text is a GitHub page about the RVB23 profile for RISC-V application processors. It outlines the mandatory and optional ISA features available to user-mode (RVB23U64) and supervisor-mode (RVB23S64) execution environments in 64-bit RVB applications processors. The page also mentions various extensions and options, some of which are localized, development, expansion, or transitory. The RVB23 profile is a customizable 64-bit application processor profile that provides a large set of features but allows optionality for more expensive and targeted extensions.提供的文本是有关 RISC-V 应用程序处理器的 RVB23 配置文件的 GitHub 页面。它概述了 64 位 RVB 应用处理器中的用户模式 (RVB23U64) 和监管者模式 (RVB23S64) 执行环境可用的强制性和可选 ISA 功能。该页面还提到了各种扩展和选项,其中一些是本地化的、开发的、扩展的或临时的。RVB23 配置文件是一种可定制的 64 位应用处理器配置文件,它提供大量功能,但允许选择更昂贵和有针对性的扩展。 | |
_2025-03-25_10:58:25_ | 2025-03-25 10:58:25 | [1_5] RISC-V_ KVM_ Forward SEED CSR access to user space - Patchwork | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA zkr | Summary: This text appears to be an email message containing a patch series for the RISC-V KVM (Kernel-based Virtual Machine) project. The patch series is related to forwarding SEED CSR (Control and Status Register) access to user space when the Zkr extension is available to the guest/VM. The patch includes changes to the `arch/riscv/kvm/vcpu_insn.c` file and is signed off by Anup Patel and reviewed by Andrew Jones. The patch series also includes metadata such as the list ID, mailman version, and sender information.此文本似乎是一封电子邮件,其中包含 RISC-V KVM(基于内核的虚拟机)项目的补丁系列。此补丁系列与当 Zkr 扩展可供来宾/VM 使用时将 SEED CSR(控制和状态寄存器)访问转发到用户空间有关。此补丁包括对 'arch/riscv/kvm/vcpu_insn.c' 文件的更改,由 Anup Patel 签署并由 Andrew Jones 审阅。修补程序系列还包括元数据,例如列表 ID、mailman 版本和发件人信息。 | |
_2025-03-25_10:58:17_ | 2025-03-25 10:58:17 | [PULL,02_28] target_riscv_kvm_ Fix exposure of Zkr - Patchwork | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA zkr | Summary: This text is a diff output showing changes made to the RISC-V QEMU emulator code. The changes include adding a new function `riscv_new_csr_seed()` to create a new value for the SEED CSR, and updating the `rmw_seed()` function to use this new function instead of generating a random value directly. The changes also include adding a new case `KVM_EXIT_RISCV_CSR` to the `kvm_arch_handle_exit()` function to handle the CSR EXIT reason.此文本是一个差异输出,显示了对 RISC-V QEMU 仿真器代码所做的更改。这些更改包括添加新函数 'riscv_new_csr_seed()' 为 SEED CSR 创建新值,以及更新 'rmw_seed()' 函数以使用此新函数,而不是直接生成随机值。这些更改还包括向 'kvm_arch_handle_exit()' 函数添加新的 case 'KVM_EXIT_RISCV_CSR' 来处理 CSR EXIT 原因。 | |
|
_2025-03-24_20:05:05_ | 2025-03-24 20:05:05 | CXL Deep Dive – Future of Composable Server Architecture and Heterogeneous Compute, Products From 20 Firms, Overview of 3.0 St | 原文链接失效了?试试备份 | TAGs:处理器 CXL | Summary: This text is about the Compute Express Link (CXL) standard, which aims to enable heterogeneous compute and composable server architectures by establishing an industry-standard protocol for connecting various chips together. CXL builds upon the existing PCIe 5.0 infrastructure but adds coherency and low-latency memory transactions. The article discusses the importance of CXL for the datacenter industry and its potential impact on memory pooling and switching. CXL 1.1 supports CXL.io, CXL.cache, and CXL.mem, and versions 2.0 and 3.0 bring additional features like memory sharing and device-to-device communications. Companies like Intel, AMD, Nvidia, and others are expected to release CXL products.本文是关于 Compute Express Link (CXL) 标准的,该标准旨在通过建立用于将各种芯片连接在一起的行业标准协议来实现异构计算和可组合服务器架构。CXL 建立在现有的 PCIe 5.0 基础设施之上,但增加了一致性和低延迟内存事务。本文讨论了 CXL 对数据中心行业的重要性及其对内存池和交换的潜在影响。CXL 1.1 支持 CXL.io、CXL.cache 和 CXL.mem,版本 2.0 和 3.0 带来了内存共享和设备到设备通信等附加功能。预计 Intel、AMD、Nvidia 等公司将发布 CXL 产品。 | |
_2025-03-24_20:03:40_ | 2025-03-24 20:03:40 | CXL 深度解析:可组合服务器与异构计算的未来 - 知乎 | 原文链接失效了?试试备份 | TAGs:处理器 CXL | Summary: This article discusses the future of composable server architecture and heterogeneous compute using the CXL (Compute Express Link) interface. CXL is a standard for high-speed serial communication between processors and memory devices. The article covers the evolution of CXL from version 1 to 3 and the products and strategies of 20 companies in the field, including Intel, AMD, Nvidia, and others. The article emphasizes the importance of understanding the impact of CXL on data centers and the opportunities and challenges it presents for the semiconductor industry. The article also discusses the benefits of CXL for memory sharing and reducing DRAM requirements. The focus is on non-packaged features, with packaging features to be discussed in a separate article on UCIe. The article was originally published by WF Research, a professional research firm specializing in first principles.本文讨论了使用 CXL (Compute Express Link) 接口的可组合服务器架构和异构计算的未来。CXL 是处理器和内存设备之间高速串行通信的标准。本文涵盖了 CXL 从版本 1 到 3 的演变,以及该领域 20 家公司的产品和战略,包括 Intel、AMD、Nvidia 等。本文强调了了解 CXL 对数据中心的影响及其为半导体行业带来的机遇和挑战的重要性。本文还讨论了 CXL 在内存共享和降低 DRAM 需求方面的优势。重点是非打包功能,打包功能将在 UCIe 上的单独文章中讨论。本文最初由 WF Research 发表,这是一家专门研究第一性原理的专业研究公司。 | |
_2025-03-24_19:33:49_ | 2025-03-24 19:33:49 | 使用 CXL 提升存储堆栈或服务工作流的软硬件处理流水线-INTEL-腾讯云开发者社区-腾讯云 | 原文链接失效了?试试备份 | TAGs:DPU | Summary: This text is about the use of Intel's DPU (Data Processing Unit) with CXL (Compute Express Link) to enhance storage stack performance and service workflows. The article discusses the challenges of CPU+DPU cooperative processing, the benefits of using a single shared memory domain between CPU and DPU, and the advantages of using SPDK software stack for CPU+DPU cooperative processing with CXL. The article also mentions the importance of building high-performance storage solutions and the challenges of current accelerator offloading technology in meeting the growing demand for high-performance secure storage solutions.本文是关于将英特尔的 DPU(数据处理单元)与 CXL (Compute Express Link) 结合使用来增强存储堆栈性能和服务工作流程的。本文讨论了 CPU+DPU 协同处理的挑战、在 CPU 和 DPU 之间使用单个共享内存域的好处,以及使用 SPDK 软件堆栈进行 CPU+DPU 与 CXL 协同处理的优势。文章还提到了构建高性能存储解决方案的重要性,以及当前加速器卸载技术在满足对高性能安全存储解决方案日益增长的需求方面面临的挑战。 | |
_2025-03-24_10:26:06_ | 2025-03-24 10:26:06 | wfi __ RISC-V Specification for generic_rv64 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA | Summary: This page describes the behavior of the wfi (Wait For Interrupt) instruction in the RV64 generic architecture. The instruction causes the processor to enter a low-power state and wait for an interrupt. The behavior of wfi is influenced by the mstatus and hstatus registers. In certain modes and conditions, executing wfi traps instead, raising an Illegal Instruction or Virtual Instruction exception.本页介绍 RV64 通用体系结构中 wfi (等待中断) 指令的行为。该指令使处理器进入低功耗状态并等待中断。wfi 的行为受 mstatus 和 hstatus 寄存器的影响。在某些模式和条件下,执行 wfi 会触发陷阱,引发非法指令异常或虚拟指令异常。 | |
_2025-03-24_10:07:02_ | 2025-03-24 10:07:02 | Linux 内核文档 — Linux 内核文档 - Linux 内核 | 原文链接失效了?试试备份 | TAGs:操作系统 linux | Summary: This page is the top-level of the Linux kernel documentation tree. It includes various manuals for developers and users, covering topics such as development processes, API usage, testing, and user space tools. The documentation is constantly improving and welcomes contributions. The page also provides links to documentation specific to various CPU architectures and other miscellaneous documents. Some documents may need to be translated or adapted to the reStructuredText format.此页面是 Linux 内核文档树的顶层。它包括面向开发人员和用户的各种手册,涵盖开发流程、API 使用、测试和用户空间工具等主题。文档不断改进,欢迎贡献。该页面还提供了指向特定于各种 CPU 体系结构的文档和其他杂项文档的链接。某些文档可能需要翻译或调整为 reStructuredText 格式。 | |
_2025-03-24_10:03:30_ | 2025-03-24 10:03:30 | 虚拟化支持 — Linux 内核文档 - Linux 内核 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 kvm | Summary: This page discusses various virtualization technologies and related documentation. Topics include KVM with its API and support for different systems like ARM, x86, and LoongArch. There's also a section on UML (User-Mode Linux) and how to create and run instances, as well as advanced topics and contributing. Other topics include Paravirt_ops, Nitro Enclaves, and the guest API documents for SEV (AMD Secure Encrypted Virtualization) and TDX (Intel Trust Domain Extensions). The page also covers Hyper-V enhancements, including VMBus, clock and timer support, PCI passthrough devices, and confidential computing virtual machines.本页讨论各种虚拟化技术和相关文档。主题包括 KVM 及其 API 以及对 ARM、x86 和 LoongArch 等不同系统的支持。还有一个部分介绍了 UML(用户模式 Linux)以及如何创建和运行实例,以及高级主题和贡献。其他主题包括 Paravirt_ops、Nitro Enclaves 以及 SEV(AMD 安全加密虚拟化)和 TDX(Intel 信任域扩展)的客户机 API 文档。该页面还介绍了 Hyper-V 增强功能,包括 VMBus、时钟和计时器支持、PCI 直通设备和机密计算虚拟机。 | |
|
|
_2025-03-23_22:39:45_ | 2025-03-23 22:39:45 | Font Awesome,一套绝佳的图标字体库和CSS框架 | 原文链接失效了?试试备份 | TAGs:前端 | Summary: Font Awesome offers scalable vector icon options, allowing you to change various CSS attributes such as size, color, shadow, or other supported effects. | |
|
_2025-03-20_16:50:26_ | 2025-03-20 16:50:26 | RISC-V IOMMU support for RISC-V machines — QEMU documentation | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 | Summary: This text describes the implementation of RISC-V IOMMU (Input/Output Memory Management Unit) emulation in QEMU (Quick Emulator) version 9.2.90. The emulation includes a PCI reference device (riscv-iommu-pci) and a platform bus device (riscv-iommu-sys) for RISC-V machines. The PCI device can be added to the 'virt' RISC-V machine using the command line option '-device riscv-iommu-pci'. The IOMMU behavior is defined by the spec but its operation is OS dependent, with the current Linux kernel support (linux-v8) not yet fully feature-complete. The IOMMU emulation was tested using the Ventana Micro Systems kernel repository, which includes patches for KVM VFIO passthrough with irqbypass. The riscv-iommu-pci device can be configured with options such as bus, ioatc-limit, intremap, ats, off, s-stage, and g-stage. The riscv-iommu-sys device is implemented as a platform bus device for RISC-V boards and can be enabled using the 'iommu-sys' machine option.本文描述了 QEMU (Quick Emulator) 版本 9.2.90 中 RISC-V IOMMU (输入/输出内存管理单元) 仿真的实现。仿真包括用于 RISC-V 计算机的 PCI 参考设备 (riscv-iommu-pci) 和平台总线设备 (riscv-iommu-sys)。可以使用命令行选项 '-device riscv-iommu-pci' 将 PCI 设备添加到 'virt' RISC-V 机器上。IOMMU 行为由规范定义,但其操作取决于操作系统,当前的 Linux 内核支持 (linux-v8) 尚未实现全部功能。IOMMU 仿真使用 Ventana Micro Systems 内核存储库进行了测试,其中包括使用 irqbypass 的 KVM VFIO 直通补丁。riscv-iommu-pci 设备可以配置 bus、ioatc-limit、intremap、ats、off、s-stage 和 g-stage 等选项。riscv-iommu-sys 设备是作为 RISC-V 板的平台总线设备实现的,可以使用 'iommu-sys' 机器选项启用。 | |
_2025-03-20_16:47:51_ | 2025-03-20 16:47:51 | RISC-V AIA support for RISC-V machines — QEMU documentation | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 | Summary: The input describes the implementation of Advanced Interrupt Architecture (AIA) support in the 'virt' RISC-V machine for TCG and KVM accelerators. There are two main modes: "aia=aplic" and "aia=aplic-imsic". The former adds one or more APLIC (Advanced Platform Level Interrupt Controller) devices, while the latter adds one or more APLIC devices and an IMSIC (Incoming MSI Controller) device for each CPU. The user behavior remains the same regardless of the accelerator used, but the emulated components change between userspace and kernel space depending on the accelerator. When running TCG, all controllers are emulated in userspace, while KVM provides no m-mode, resulting in no m-mode APLIC or IMSIC emulation. The table provided outlines how the AIA and accelerator options determine what is emulated in userspace.输入描述了 'virt' RISC-V 机器中对 TCG 和 KVM 加速器的高级中断架构 (AIA) 支持的实现。有两种主要模式:“aia=aplic”和“aia=aplic-imsic”。前者添加一个或多个 APLIC (高级平台级中断控制器) 设备,而后者添加一个或多个 APLIC 设备,并为每个 CPU 添加一个 IMSIC (传入 MSI 控制器) 设备。无论使用何种加速器,用户行为都保持不变,但仿真组件在用户空间和内核空间之间会发生变化,具体取决于加速器。运行 TCG 时,所有控制器都在用户空间中仿真,而 KVM 不提供 m 模式,因此没有 m 模式 APLIC 或 IMSIC 仿真。提供的表格概述了 AIA 和 accelerator 选项如何确定在用户空间中模拟的内容。 | |
_2025-03-20_13:51:27_ | 2025-03-20 13:51:27 | 浏览器崩溃的第一性原理:内存管理的艺术 | 原文链接失效了?试试备份 | TAGs:内存 | Summary: This article discusses the cause of browser crashes, which is often related to memory management. The article focuses on V8's memory management principles, including its garbage collection mechanism, and common memory leak scenarios along with their prevention methods. The article concludes by emphasizing the importance of understanding these concepts to optimize code, reduce memory waste, and improve overall performance and user experience. The author encourages readers to engage in discussions in the comment section.本文讨论了浏览器崩溃的原因,这通常与内存管理有关。本文重点介绍了 V8 的内存管理原则,包括其垃圾回收机制、常见的内存泄漏场景及其预防方法。本文最后强调了理解这些概念对于优化代码、减少内存浪费以及提高整体性能和用户体验的重要性。作者鼓励读者在评论部分参与讨论。 | |
_2025-03-18_20:13:33_ | 2025-03-18 20:13:33 | [4_4] iommu_riscv_ Add support for Svnapot - Patchwork | 原文链接失效了?试试备份 | TAGs:处理器 risc-v IOMMU svnapot | Summary: This text is an email message containing a patch for the RISC-V IOMMU driver in the Linux kernel. The patch adds support for the Svnapot page size in the IOMMU driver. The email includes various headers and signatures indicating the source and history of the patch. The patch itself consists of adding new functions and modifying existing ones in the iommu.c file to handle Svnapot page sizes. The patch also includes comments and tests to ensure the correct functionality.此文本是一封电子邮件,其中包含 Linux 内核中 RISC-V IOMMU 驱动程序的补丁。此修补程序在 IOMMU 驱动程序中添加了对 Svnapot 页面大小的支持。该电子邮件包含各种标头和签名,用于指示补丁的来源和历史记录。补丁本身包括添加新函数和修改 iommu.c 文件中的现有函数,以处理 Svnapot 页面大小。该补丁还包括注释和测试,以确保功能正确。 | |
_2025-03-18_19:54:23_ | 2025-03-18 19:54:23 | 浅谈 IOMMU 的 page size 问题 - 知乎 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 IO IOMMU | Summary: This article discusses the issue of page sizes in IOMMU (Input/Output Memory Management Unit) for Intel and AMD platforms. The article also mentions the problems with the iommu_iova_to_phys() function and provides references to Intel and AMD specifications. Intel and AMD IOMMU have two main issues: page table level issues and page size issues. In this article, the focus is on the page size issues for Intel and AMD. | |
_2025-03-18_18:41:39_ | 2025-03-18 18:41:39 | [PATCH v10 1_1] riscv_ Allow to downgrade paging mode from the command line - Alexandre Ghiti | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 内存 satp | Summary: This is an email discussing a patch for the RISC-V Linux kernel that adds two early command line parameters to allow downgrading the satp (Supervisor Address Translation and Protection) mode from the command line. The patch also includes modifications to the kernel build system and various source files to support these new parameters. The patch was tested and reviewed by Björn Töpel.这是一封讨论 RISC-V Linux 内核补丁的电子邮件,该补丁添加了两个早期命令行参数,以允许从命令行降级 satp(监管者地址转换与保护)模式。该补丁还包括对内核构建系统和各种源文件的修改,以支持这些新参数。该补丁由 Björn Töpel 进行测试和审查。 | |
_2025-03-18_17:28:22_ | 2025-03-18 17:28:22 | qemu 共享内存设备——ivshmem-CSDN博客 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 qemu 内存 共享内存 | Summary: This text is about the use of ivshmem, a shared memory device in QEMU virtual machines. Ivshmem allows for efficient data transfer between virtual machine processes and the host, and is often used in virtualization scenarios such as antivirus software. The text explains how ivshmem works, its implementation in QEMU, and its benefits. It also mentions some related resources for further learning.本文介绍了 IVSHMEM 的使用,IVSHMEM 是 QEMU 虚拟机中的共享内存设备。Ivshmem 允许在虚拟机进程和主机之间进行高效的数据传输,并且通常用于虚拟化方案,例如防病毒软件。本文解释了 ivshmem 的工作原理、它在 QEMU 中的实现及其好处。它还提到了一些用于进一步学习的相关资源。 | |
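One common way to reach the shared region from inside the guest is to mmap BAR2 of the ivshmem PCI device through sysfs. A hedged sketch for the guest side; the PCI address 0000:00:04.0 is only an example and depends on the machine configuration.

```c
/* Map the shared-memory BAR (BAR2) of an ivshmem PCI device inside the guest.
 * The device path is an example; locate the real one with lspci. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *bar2 = "/sys/bus/pci/devices/0000:00:04.0/resource2";
    int fd = open(bar2, O_RDWR);
    if (fd < 0) { perror("open resource2"); return 1; }

    struct stat st;
    fstat(fd, &st);                          /* BAR size == shared-memory size */
    void *shm = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }

    /* Writes land in the memory backend shared with the host process
     * (or other guests) attached to the same ivshmem object. */
    strcpy((char *)shm, "hello from the guest");

    munmap(shm, st.st_size);
    close(fd);
    return 0;
}
```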
_2025-03-18_17:03:07_ | 2025-03-18 17:03:07 | Kernel Samepage Merging — The Linux Kernel documentation | 原文链接失效了?试试备份 | TAGs:操作系统 linux 内存 ksm | Summary: The Linux Kernel's KSM (Kernel Shared Memory) feature is a memory-saving mechanism that merges identical pages in the system or application memory. It was introduced in version 2.6.32 and can be enabled by setting CONFIG_KSM=y. The KSM daemon, ksmd, periodically scans user memory for identical pages and merges them, reducing the overall memory usage. The merging process involves copying the content of the identical pages into a single write-protected page, which is then shared among the processes. | |
_2025-03-18_16:48:13_ | 2025-03-18 16:48:13 | Using Kernel Samepage Merging with KVM – The Linux Cluster | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 kvm 内存 ksm | Summary: The Linux Cluster Blog post discusses using Kernel Samepage Merging (KSM) with KVM for more efficient use of memory in Linux Cluster and Enterprise Linux systems. KSM is a Linux kernel feature that combines identical memory pages from multiple processes into one copy-on-write memory region. To verify KSM support, users can check the kernel configuration file and the number of kernel pages. Additionally, KVM guests need to request identical pages merging using the new madvise interface for KSM to take effect.Linux 集群博客文章讨论了将内核同页合并 (KSM) 与 KVM 结合使用,以便在 Linux 集群和企业 Linux 系统中更高效地使用内存。KSM 是一项 Linux 内核功能,它将来自多个进程的相同内存页合并到一个写入时复制内存区域。要验证 KSM 是否支持,用户可以检查内核配置文件和内核页数。此外,KVM 客户机需要使用新的 madvise 接口请求合并相同的页面,KSM 才能生效。 | |
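The madvise interface mentioned above is the same call QEMU issues on guest RAM regions; any process can opt a mapping into KSM scanning in the same way. A minimal sketch:

```c
/* Opt an anonymous mapping into KSM scanning with madvise(MADV_MERGEABLE).
 * ksmd only merges pages in regions advised this way, and only while
 * /sys/kernel/mm/ksm/run is set to 1 (requires CONFIG_KSM=y). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 64 * 1024 * 1024;           /* 64 MiB of identical pages */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memset(p, 0x5a, len);                    /* make every page identical */

    if (madvise(p, len, MADV_MERGEABLE) != 0)
        perror("madvise(MADV_MERGEABLE)");

    /* While this sleeps, /sys/kernel/mm/ksm/pages_sharing should grow. */
    pause();
    return 0;
}
```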
_2025-03-18_11:50:25_ | 2025-03-18 11:50:25 | riscv_ Introduce 64K base page [LWN.net] | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 内存 | Summary: This email contains a patch series for introducing a larger base page size on RISC-V architecture, which currently only supports 4K pages. The patch aims to decouple software pages managed by the kernel from hardware pages managed by the MMU, allowing larger software base pages and reducing TLB misses. The patch series includes adaptations to various architecture codes and page table operations, and supports both bare metal and virtualization scenarios. Future work includes reducing memory usage, implementing isolation measures, and better integration with folios, among other things. The patch series is based on v6.7-rc1 and contains changes to multiple files in the RISC-V kernel codebase.此电子邮件包含一个补丁系列,用于在 RISC-V 架构上引入更大的基本页面大小,该架构目前仅支持 4K 页面。该补丁旨在将内核管理的软件页面与 MMU 管理的硬件页面分离,从而允许更大的软件基本页面并减少 TLB 缺失。补丁系列包括对各种架构代码和页表操作的适配,同时支持裸机和虚拟化场景。未来的工作包括减少内存使用、实施隔离措施以及与 folio 更好地协作等。补丁系列基于 v6.7-rc1,包含对 RISC-V 内核代码库中多个文件的更改。 | |
_2025-03-18_11:29:17_ | 2025-03-18 11:29:17 | Configure SR-IOV Network Virtual Functions in Linux_ KVM_ | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 IO SR-IOV | Summary: This tutorial demonstrates different methods of using Single Root Input/Output Virtualization (SR-IOV) network virtual functions (VFs) in Linux KVM virtual machines (VMs) and discusses the pros and cons of each method. The recommended method is using the KVM virtual network pool of SR-IOV adapters, which has the same performance as the VF PCI passthrough method but is easier to set up. The tutorial covers SR-IOV basics, supported Intel network interface cards, assumptions, network and system configuration, host and guest configuration, and scope. The author is a Platform Application Engineer at Intel.本教程演示了在 Linux KVM 虚拟机 (VM) 中使用单根输入/输出虚拟化 (SR-IOV) 网络虚拟功能 (VF) 的不同方法,并讨论了每种方法的优缺点。推荐的方法是使用 SR-IOV 适配器的 KVM 虚拟网络池,该池具有与 VF PCI 直通方法相同的性能,但更易于设置。本教程涵盖 SR-IOV 基础知识、支持的 Intel 网络接口卡、假设、网络和系统配置、主机和客户机配置以及范围。作者是 Intel 的一名平台应用工程师。 | |
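Regardless of which attachment method the tutorial recommends, the VFs themselves are typically created by writing to the PF's sriov_numvfs attribute in sysfs, after which they show up as ordinary PCI devices that can be handed to guests. A sketch; the interface name eth0 and the VF count of 4 are placeholders.

```c
/* Create SR-IOV virtual functions by writing the desired count to the
 * physical function's sriov_numvfs sysfs attribute.  "eth0" and 4 are
 * examples; run as root. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }

    /* NB: if some VFs already exist, the kernel only accepts a new count
     * after 0 has been written first. */
    if (fprintf(f, "4\n") < 0)
        perror("write sriov_numvfs");
    fclose(f);
    return 0;
}
```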
_2025-03-18_10:19:15_ | 2025-03-18 10:19:15 | KVM_ Configure Intel X710 - E810 series NICs for High Performance | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 IO SR-IOV | Summary: This page is about setting up Intel X710 and E810 series NICs for high performance on F5 BIG-IP Virtual Edition (VE) in a KVM environment. The document explains the process of adding Intel IOMMU to the Linux grub file, modifying driver settings to enable SR-IOV, verifying the OS has loaded the Intel driver, installing the Intel Ice PF and IAVF drivers, upgrading NIC firmware using the supplied NVM tool, creating VFs, and deploying BIG-IP VE in KVM. The document also includes troubleshooting tips and related links.本页介绍如何在 KVM 环境中的 F5 BIG-IP Virtual Edition (VE) 上设置 Intel X710 和 E810 系列 NIC 以实现高性能。本文档介绍了将 Intel IOMMU 添加到 Linux grub 文件、修改驱动程序设置以启用 SR-IOV、验证操作系统是否已加载 Intel 驱动程序、安装 Intel Ice PF 和 IAVF 驱动程序、使用提供的 NVM 工具升级 NIC 固件、创建 VF 以及在 KVM 中部署 BIG-IP VE 的过程。本文档还包括故障排除提示和相关链接。 | |
_2025-03-14_16:51:19_ | 2025-03-14 16:51:19 | Virtualization Tuning and Optimization Guide _ Red Hat Product Documentation | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 性能 | Summary: The Red Hat Enterprise Linux Virtualization Tuning and Optimization Guide focuses on enhancing KVM performance. It offers tips for utilizing KVM features and optimizing host systems and virtualized guests. KVM guests run as Linux processes on the host, with virtual CPUs (vCPUs) handled by the Linux scheduler. Resource allocation and virtual hardware configuration significantly impact guest performance. | |
|
_2025-03-12_15:53:18_ | 2025-03-12 15:53:18 | ext4文件镜像扩容方法详解 - DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:操作系统 linux 文件系统 | Summary: The user is asking for methods to expand the size of an ext4 file system image. The process involves two main steps: increasing the size of the ext4 file system image itself, and then adjusting the internal ext4 file system to utilize the newly added space. | |
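Concretely, the two steps usually are: grow the image file (ftruncate(2), truncate(1), or qemu-img resize), then run e2fsck followed by resize2fs so the filesystem takes over the new space. A hedged sketch of those steps; the image name and 20 GiB target are placeholders, and the image must not be mounted while this runs.

```c
/* Step 1: sparse-extend the raw ext4 image file; step 2: let resize2fs grow
 * the filesystem inside it.  "rootfs.ext4" and 20 GiB are examples only. */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *img = "rootfs.ext4";
    off_t new_size = 20LL * 1024 * 1024 * 1024;   /* 20 GiB */

    int fd = open(img, O_RDWR);
    if (fd < 0) { perror(img); return 1; }
    if (ftruncate(fd, new_size) != 0) {           /* grow the file, sparsely */
        perror("ftruncate");
        return 1;
    }
    close(fd);

    /* resize2fs insists on a clean filesystem, hence the fsck first. */
    if (system("e2fsck -f -y rootfs.ext4") != 0)
        fprintf(stderr, "e2fsck reported problems\n");
    if (system("resize2fs rootfs.ext4") != 0)
        fprintf(stderr, "resize2fs failed\n");
    return 0;
}
```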
_2025-03-11_10:07:09_ | 2025-03-11 10:07:09 | qemu内存迁移流程_虚拟机的内存迁移可以包含哪几个阶段-CSDN博客 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 热迁移 | Summary: This text describes how QEMU memory migration works when driven by libvirt. The migration process involves setting up the destination QEMU, transferring the memory, and completing the migration by copying VMState. The memory transfer can be limited by bandwidth, and the first pass over the dirty pages may require sending every memory page. There are two types of migration: pre-copy, which copies all memory pages to the destination before switching over, and post-copy, which starts the destination first and copies pages as they are needed; the choice between them depends on the amount of memory being transferred and the available bandwidth. Memory can be transferred over several transports, including TCP, Unix domain sockets, and an executed helper. libvirt typically uses the fd method: the upper-layer application (libvirt) opens a file descriptor and the QEMU process only writes data to it. libvirt prepares the destination socket fd and passes it to QEMU through Linux inter-process communication, handing over the struct file reference so that QEMU can keep writing to the fd after libvirt. This resembles fd inheritance across fork into a child process, but here it must be done by explicitly passing file descriptors. The text also includes a diagram of the memory transfer between the source and destination QEMU.本文介绍了使用 libvirt 迁移 QEMU 中内存的过程。它解释了迁移过程包括设置目标 QEMU、传输内存以及通过复制 VMState 完成迁移。该文本还指出,内存传输可能会受到带宽的限制,并且所有脏页的第一次传输可能需要发送所有内存页。文本还提到有两种类型的迁移:pre-copy 和 post-copy。复制前迁移涉及在启动之前将所有内存页复制到目标,而复制后迁移涉及在启动目标后根据需要复制页面。正文还提到,在复制前和复制后迁移之间进行选择取决于要传输的内存量和可用带宽。文本还提到了传输内存的不同方法,包括使用 TCP、Unix 域套接字和可执行文件。该文本还解释了 libvirt 通常使用 fd 方法来传输内存,这涉及上层应用程序 (libvirt) 打开一个文件描述符,而 qemu 进程只向其写入数据。该文本还解释了 libvirt 进程准备目标套接字 fd 并将其传递给 qemu,这是通过 Linux 进程间通信完成的。该文本还解释了 libvirt 进程将结构文件指针传递给 qemu,允许它在 libvirt 之后将信息写入 fd。文本还提到,这类似于 fork 和子进程的情况,但它需要使用传递文件描述符的方法。文本还包括一个图表,其中显示了源和目标 qemu 的内存传输过程。 | |
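The file-descriptor handoff described above relies on the standard SCM_RIGHTS ancillary-data mechanism of Unix domain sockets; this is how a management process can open the migration channel and let QEMU write into it. A minimal sketch of the sending side, assuming an already-connected AF_UNIX socket:

```c
/* Send an open file descriptor to another process over a connected AF_UNIX
 * socket using SCM_RIGHTS -- the mechanism libvirt relies on when it hands
 * QEMU a pre-opened migration fd. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int unix_sock, int fd_to_pass)
{
    char dummy = 'F';                            /* at least one byte of payload */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {                                      /* properly aligned ancillary buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof(u));

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;               /* "this message carries fds" */
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(unix_sock, &msg, 0) < 0 ? -1 : 0;
}
```

The receiver calls recvmsg() and reads the duplicated descriptor back out of the SCM_RIGHTS control message; both processes then reference the same underlying struct file.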
_2025-03-10_17:32:56_ | 2025-03-10 17:32:56 | 高性能CPU微架构应该具有哪些特性-CSDN博客 | 原文链接失效了?试试备份 | TAGs:处理器 | Summary: This text is about CPU microarchitecture and its impact on performance. It discusses various aspects of CPU design, including instruction sets, microarchitecture, and system architecture. The text also mentions specific CPUs and their features, such as Intel's Core and Arm's Cortex-A9. The author emphasizes the importance of understanding microarchitecture to optimize performance and improve system design. The text also touches upon the concept of microservices and their architecture. Overall, the text provides a comprehensive overview of CPU design and its role in modern computing.本文是关于 CPU 微架构及其对性能的影响的。它讨论了 CPU 设计的各个方面,包括指令集、微架构和系统架构。文本还提到了特定的 CPU 及其功能,例如 Intel 的 Core 和 Arm 的 Cortex-A9。作者强调了了解微架构对于优化性能和改进系统设计的重要性。本文还涉及微服务的概念及其架构。总体而言,该文本全面概述了 CPU 设计及其在现代计算中的作用。 | |
_2025-03-10_17:25:48_ | 2025-03-10 17:25:48 | HotChips 2023_ Ventana 不寻常的 Veyron V1 - 知乎 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 产品 | Summary: This article discusses Ventana's unconventional Veyron V1 CPU design, focusing on its unique features. The Veyron V1 is an eight-way out-of-order core with a 15-cycle branch mispredict penalty, a large 12K-entry single-level BTB, and a "single-cycle next-line predictor." It also has a 512KB L1/L2 instruction cache and a 64KB VIVT data cache. The design aims for a 3.6GHz target frequency but can be scaled down to reduce power consumption. The article also mentions Ventana's plans to improve the branch mispredict penalty in the V2 architecture. The article was originally published on ChipsAndCheese and is translated and adapted here with permission from the author.本文讨论了 Ventana 非常规的 Veyron V1 CPU 设计,重点介绍其独特的功能。威龙 V1 是一个八向乱序内核,具有 15 个周期的分支错误预测惩罚、一个大型 12K 项单级 BTB 和一个“单周期下一行预测器”。它还具有 512KB L1/L2 指令缓存和 64KB VIVT 数据缓存。该设计的目标是 3.6GHz 的目标频率,但可以缩小以降低功耗。文章还提到了 Ventana 改进 V2 架构中分支错误预测惩罚的计划。本文最初发表在 ChipsAndCheese 上,经作者许可在此处翻译和改编。 | |
_2025-03-10_17:17:33_ | 2025-03-10 17:17:33 | DPDK virtio-net加载注意事项-lvyilong316-ChinaUnix博客 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 IO virtio | Summary: Aside from a site-wide notice about a ChinaUnix server migration (September 30 to October 4), this post on the lvyilong316 blog covers notes on loading DPDK virtio-net drivers. It walks through how DPDK maps virtio-net device resources with vfio versus uio, the differences between modern and legacy virtio PCI devices, the need to enable an IOMMU in the VM when using vfio, and how resource mapping differs between x86 and other architectures. It closes with the differences in interrupt notification between modern and legacy virtio PCI devices and the option of disabling the event_idx feature to avoid frontend-to-backend notifications.除了 ChinaUnix 网站关于 9 月 30 日至 10 月 4 日服务器迁移的站点公告外,这篇 lvyilong316 的博客文章主要讨论 DPDK virtio-net 的加载注意事项:DPDK 使用 vfio 和 uio 为 virtio-net 设备映射资源的过程、现代 (modern) 与传统 (legacy) virtio PCI 设备之间的差异、使用 vfio 时需要在 VM 中打开 IOMMU,以及 x86 与其他架构在资源映射上的差异。文章最后讨论了现代与传统 virtio PCI 设备在中断通知上的差异,以及禁用 event_idx 特性以避免前端向后端发送通知的选项。 | |
_2025-03-10_17:02:13_ | 2025-03-10 17:02:13 | RISC-V最先进CPU微架构分析_rva23 profile-CSDN博客 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 产品 | Summary: The blog post discusses the advanced CPU microarchitectures of SIFIVE P870 and Veyron V1, both of which are based on the RISC-V instruction set. SIFIVE P870 follows the RV32GC profile and features a 6-decode processor, 96-entry integer issue queue, and 64KB L1/L2 instruction cache. Veyron V1 targets the server and automotive markets and has a 64KB Dcache, 512KB L1/L2 instruction cache, and a 2-cycle Icache and ITLB access delay. Both microarchitectures have a similar instruction set and have a similar number of integer and FP floating-point instructions. However, SIFIVE P870 has a deeper pipeline, which puts more pressure on the predictor and requires more power consumption. Veyron V1 has a more power-efficient design and a larger L2 TLB. Overall, both microarchitectures aim to provide high performance and low power consumption for their respective markets.该博客文章讨论了 SIFIVE P870 和 Veyron V1 的高级 CPU 微架构,这两者都基于 RISC-V 指令集。SIFIVE P870 遵循 RV32GC 配置文件,具有 6 解码处理器、96 条目整数发出队列和 64KB L1/L2 指令缓存。Veyron V1 面向服务器和汽车市场,具有 64KB Dcache、512KB L1/L2 指令缓存以及 2 周期 Icache 和 ITLB 访问延迟。两种微架构具有相似的指令集,并且具有相似数量的整数和 FP 浮点指令。但是,SIFIVE P870 具有更深的管道,这给预测器带来了更大的压力,并且需要更多的功耗。威龙 V1 具有更节能的设计和更大的 L2 TLB。总体而言,这两种微架构都旨在为各自的市场提供高性能和低功耗。 | |
_2025-03-10_16:50:53_ | 2025-03-10 16:50:53 | riscv-profiles_src_rva23-profile.adoc at main · riscv_riscv-profiles | 原文链接失效了?试试备份 | TAGs:处理器 risc-v RISC-V Software Ecosystem riscv-profiles | Summary: The RVA23 profiles are a set of specifications for RISC-V 64-bit application processors, with two profiles: RVA23U64 for user-mode and RVA23S64 for supervisor-mode. The RVA23 profiles aim to align implementations and enable binary software ecosystems to rely on a large set of guaranteed extensions and a small number of discoverable coarse-grain options. The profiles specify various mandatory and optional ISA features, including integer multiplication and division, atomic instructions, single- and double-precision floating-point instructions, vector extension, and more. The profiles also include extensions like instruction-fetch fence, control and status register access, and hardware performance counters.RVA23 配置文件是一组用于 RISC-V 64 位应用处理器的规范,包含两个配置文件:RVA23U64 用于用户模式,RVA23S64 用于管理器模式。RVA23 配置文件旨在调整实施,并使二进制软件生态系统能够依赖大量有保证的扩展和少量可发现的粗粒度选项。这些配置文件指定了各种强制性和可选的 ISA 功能,包括整数乘法和除法、原子指令、单精度和双精度浮点指令、向量扩展等。这些配置文件还包括 instruction-fetch fence、控制和状态寄存器访问以及硬件性能计数器等扩展。 | |
_2025-03-10_16:47:33_ | 2025-03-10 16:47:33 | RISC-V 软件:2024 年重大进展与 2025 年展望 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v RISC-V Software Ecosystem | Summary: SiFive Inc., a leading RISC-V processor IP company, has seen significant progress in RISC-V software development in 2024. Key achievements include the enhancement of language runtime environments, the release of RISC-V optimization guidelines, and the support of RISC-V vector instructions in Linux kernels. In 2025, SiFive will focus on optimizing software for the recently introduced RVA23 Profile hardware. Additionally, SiFive is working on delivering optimized code for their processors, particularly in the field of artificial intelligence. The company has shown a reference software stack for running large language models on their RISC-V intelligent products and discussed internal test results at a webinar. SiFive is also collaborating with upstream data center companies to optimize and port various software stacks for RISC-V. While significant progress has been made in RISC-V software development, there is still work to be done, and SiFive plans to continue optimizing software and collaborating with RISE and other ecosystem partners to accelerate the growth of the RISC-V ecosystem.SiFive Inc. 是一家领先的 RISC-V 处理器 IP 公司,在 2024 年见证了 RISC-V 软件开发的重大进展。主要成就包括增强语言运行时环境、发布 RISC-V 优化指南以及在 Linux 内核中支持 RISC-V 矢量指令。2025 年,SiFive 将专注于优化最近推出的 RVA23 Profile 硬件的软件。此外,SiFive 正在努力为其处理器提供优化的代码,尤其是在人工智能领域。该公司展示了在其 RISC-V 智能产品上运行大型语言模型的参考软件堆栈,并在网络研讨会上讨论了内部测试结果。SiFive 还与上游数据中心公司合作,为 RISC-V 优化和移植各种软件堆栈。虽然 RISC-V 软件开发取得了重大进展,但仍有工作要做,SiFive 计划继续优化软件并与 RISE 和其他生态系统合作伙伴合作,以加速 RISC-V 生态系统的发展。 | |
_2025-03-10_15:36:23_ | 2025-03-10 15:36:23 | riscv kvm 方案代码调研 _ blog | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 虚拟化 | Summary: This text describes the virtualization of memory, CPU, timer, and interrupts in the context of a virtual machine using KVM (Kernel-based Virtual Machine) for RISC-V processors. The memory virtualization includes the conversion from Guest Physical Address (GPA) to Host Virtual Address (HVA), with sub-steps for data structures and process analysis. The CPU virtualization involves the VCPU execution flow, including KVM_VCPU_RUN, vCPU scheduling, and hs timer tick details. The timer virtualization includes RISC-V timer support, user-level access, guest access with guest timer tick processing and guest time, and sstc vstimecmp. Interrupt virtualization includes PIC interrupt injection with registration and triggering processes, and AIA imsic interrupt processing with kvm_riscv_vcpu_aia_update, guest access to siselect and sireg, MMIO injection, and imsic doorbell interrupt.本文描述了基于 KVM(基于内核的虚拟机)的 RISC-V 虚拟机中内存、CPU、定时器和中断的虚拟化。内存虚拟化包括从来宾物理地址 (GPA) 到主机虚拟地址 (HVA) 的转换,以及用于数据结构和进程分析的子步骤。CPU 虚拟化涉及 VCPU 执行流程,包括 KVM_VCPU_RUN、vCPU 调度和 hs 计时器 tick 详细信息。计时器虚拟化包括 RISC-V 计时器支持、用户级访问、使用来宾计时器时钟周期处理和来宾时间的来宾访问以及 sstc vstimecmp。中断虚拟化包括带有注册和触发进程的 PIC 中断注入、带有 kvm_riscv_vcpu_aia_update 的 AIA imsic 中断处理、来宾对 siselect 和 sireg 的访问、MMIO 注入和 imsic 门铃中断。 | |
_2025-03-07_17:48:43_ | 2025-03-07 17:48:43 | RISC-V Day Tokyo|阎明铸分享SAIL-RISCV内存模型重构 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 内存 | Summary: At the RISC-V Day Tokyo 2025 Spring event, Yan Mingzhu (阎明铸) from the RUYISDK team at the Chinese Academy of Sciences Software Research Institute shared their achievements in restructuring the SAIL-RISCV memory model. They presented a poster on the challenges of SAIL-RISCV's memory model, which includes the lack of support for 34-bit physical addresses and the ambiguity between physical and virtual memory. The team addressed these challenges by restructuring the SAIL-RISCV memory model, enabling 34-bit physical address support and ensuring type safety. The improved memory model offers better flexibility, reduces coupling between physical and virtual memory, and provides a more precise memory abstraction for SAIL-RISCV. Yan Mingzhu also emphasized the importance of continuous technological development and innovation in the RISC-V ecosystem.在 RISC-V Day 东京 2025 春季活动中,来自中科院软件研究所 RUYISDK 团队的阎明铸分享了他们在重构 SAIL-RISCV 内存模型方面的成就。他们展示了一张关于 SAIL-RISCV 内存模型挑战的海报,其中包括缺乏对 34 位物理地址的支持以及物理内存和虚拟内存之间的歧义。该团队通过重构 SAIL-RISCV 内存模型、启用 34 位物理地址支持和确保类型安全来应对这些挑战。改进的内存模型提供了更好的灵活性,减少了物理内存和虚拟内存之间的耦合,并为 SAIL-RISCV 提供了更精确的内存抽象。阎明铸还强调了 RISC-V 生态系统中持续技术发展和创新的重要性。 | |
_2025-03-07_12:05:21_ | 2025-03-07 12:05:21 | ARM_virtualization Performance and Architectural Implications - DeepSeek - 探索未至之境 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 性能 | Summary: This paper explores the performance of ARM virtualization on server hardware, specifically focusing on multi-core ARM systems and comparing two popular hypervisors, KVM and Xen, on both ARM and x86 platforms. The study reveals that ARM enables significantly faster transitions between a virtual machine (VM) and a Type 1 hypervisor like Xen compared to x86, but Type 2 hypervisors like KVM on ARMv8.0 have higher overhead for VM-to-hypervisor transitions. The researchers also discuss the impact of hypervisor software design and implementation on overall performance. They propose improvements to the ARM architecture, such as Virtualization Host Extensions (VHE), to bring Type 2 hypervisors' fast transition costs to real application workloads involving I/O. The research is significant as ARM servers are becoming increasingly common, and understanding their virtualization performance is crucial for hardware and software architects.本文探讨了 ARM 虚拟化在服务器硬件上的性能,特别关注多核 ARM 系统,并比较了 ARM 和 x86 平台上两种流行的虚拟机管理程序 KVM 和 Xen。研究表明,与 x86 相比,ARM 可以在虚拟机 (VM) 和 Xen 等 1 类管理程序之间实现更快的转换,但 ARMv8.0 上的 KVM 等 2 类管理程序具有更高的 VM 到管理程序转换开销。研究人员还讨论了虚拟机管理程序软件设计和实施对整体性能的影响。他们提出了对 ARM 架构的改进,例如虚拟化主机扩展 (VHE),以将 Type 2 虚拟机管理程序的快速转换成本引入涉及 I/O 的实际应用程序工作负载。随着 ARM 服务器变得越来越普遍,了解其虚拟化性能对于硬件和软件架构师来说至关重要,因此这项研究具有重要意义。 | |
_2025-03-07_11:08:51_ | 2025-03-07 11:08:51 | KVM性能分析工具-zhurunguang-ChinaUnix博客 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 性能 | Summary: Aside from a site-wide notice about a ChinaUnix server migration (September 30 to October 4, 2025), this post on the zhurunguang blog collects statistics on KVM performance events and their analysis, along with code snippets for enabling and disabling KVM tracing and for analyzing the collected KVM performance data.除了 ChinaUnix 网站关于 2025 年 9 月 30 日至 10 月 4 日服务器迁移的站点公告外,这篇 zhurunguang 的博客文章汇总了 KVM 性能事件的统计数据及其分析,并给出了用于启用和禁用 KVM 跟踪以及分析 KVM 性能数据的代码片段。 | |
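The enabling and disabling described there amounts to writes under the events/kvm directory of tracefs, which can also be done programmatically. A sketch, assuming tracefs is mounted at the usual /sys/kernel/debug/tracing path and the program runs as root:

```c
/* Toggle all kvm:* tracepoints and read a few trace lines through tracefs.
 * Assumes tracefs is mounted at /sys/kernel/debug/tracing (run as root). */
#include <stdio.h>

static int write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fputs(val, f);
    fclose(f);
    return 0;
}

int main(void)
{
    const char *base = "/sys/kernel/debug/tracing";
    char path[256], line[512];

    snprintf(path, sizeof(path), "%s/events/kvm/enable", base);
    write_str(path, "1");                        /* enable every kvm:* event */

    snprintf(path, sizeof(path), "%s/trace_pipe", base);
    FILE *pipe = fopen(path, "r");               /* blocking stream of events */
    for (int i = 0; pipe && i < 20 && fgets(line, sizeof(line), pipe); i++)
        fputs(line, stdout);                     /* print the first 20 lines */
    if (pipe) fclose(pipe);

    snprintf(path, sizeof(path), "%s/events/kvm/enable", base);
    write_str(path, "0");                        /* disable them again */
    return 0;
}
```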
_2025-03-07_11:05:33_ | 2025-03-07 11:05:33 | Troubleshooting KVM Virtualization Problem With Log Files in Linux - nixCraft | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 kvm | Summary: This article by Vivek Gite discusses troubleshooting KVM virtualization issues using log files on a Linux system. The author covers the locations of various log files related to KVM, including those for virt-install, virt-manager, and running virtual machines. The article also explains how to use Linux tools like grep and tail to view these logs. Additionally, it mentions the use of the virsh command to connect to guest serial consoles for troubleshooting. The article concludes by mentioning the existence of KVM configuration files in the /etc/libvirt/qemu/ directory and the installation of kvm-tools package for diagnostic and debugging tools.Vivek Gite 的这篇文章讨论了在 Linux 系统上使用日志文件对 KVM 虚拟化问题进行故障排除。作者介绍了与 KVM 相关的各种日志文件的位置,包括 virt-install、virt-manager 和正在运行的虚拟机的日志文件。本文还介绍了如何使用 grep 和 tail 等 Linux 工具查看这些日志。此外,它还提到了使用 virsh 命令连接到客户机串行控制台以进行故障排除。文章最后提到了 /etc/libvirt/qemu/ 目录下存在 KVM 配置文件,并安装了用于诊断和调试工具的 kvm-tools 软件包。 | |
_2025-03-07_10:51:08_ | 2025-03-07 10:51:08 | Perf events - KVM | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 性能 | Summary: This page explains how to use the Linux perf tool for counting and tracing performance events in the KVM kernel module. Previously, tools like kvm\_stat and kvm\_trace were used for this purpose, but now standard Linux tracing tools are used instead. The page covers counting and tracing events, recording events for the host and guest, and reporting events. The perf tool can be used to count events using the `perf stat` command, and detailed traces can be generated using ftrace. Events can be recorded to a file for later analysis, and the order of arguments is important when using the perf command. The page also mentions an alternative method of getting the guest's kallsyms and modules using sshfs and the --guestmount option.本页解释了如何使用 Linux perf 工具对 KVM 内核模块中的性能事件进行计数和跟踪。以前,kvm\_stat 和 kvm\_trace 等工具用于此目的,但现在使用标准的 Linux 跟踪工具。该页面涵盖计数和跟踪事件、记录主机和客户机的事件以及报告事件。perf 工具可用于使用 'perf stat' 命令对事件进行计数,并且可以使用 ftrace 生成详细的跟踪。可以将事件记录到文件中以供以后分析,并且在使用 perf 命令时,参数的顺序很重要。该页面还提到了使用 sshfs 和 --guestmount 选项获取客户机的 kallsym 和模块的替代方法。 | |
_2025-03-06_12:01:19_ | 2025-03-06 12:01:19 | 如何理解RISC-V中的hart_ - 知乎 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v | Summary: The term "hart" is used in the context of RISC-V to represent an abstract execution resource, as opposed to a software thread programming abstraction. It is a hardware thread and operates like an independent hardware thread from the perspective of software inside an execution environment. An execution environment can time-multiplex a set of guest harts onto fewer host harts, but they must operate independently and the environment must be able to preempt guest harts and not wait indefinitely for guest software to yield control. In simple terms, a hart is a hardware thread, similar to any other architecture's hardware thread. The difference between RISC-V cores and harts is the same as that between other architectures' cores and hardware threads - there is nothing new here. However, the concept of a hart can be implemented directly in hardware or virtualized, and it is a resource within an execution environment that has state and advances while executing a RISC-V instruction stream independently of other software inside the same execution environment.在 RISC-V 的上下文中,术语 "hart" 用于表示抽象执行资源,而不是软件线程编程抽象。它是一个硬件线程,从执行环境中的软件角度来看,它的运行方式类似于独立的硬件线程。执行环境可以将一组客户机 HART 分时复用到较少的主机 HART 上,但它们必须独立运行,并且环境必须能够抢占客户机 HART,而不能无限期地等待客户机软件让出控制权。简单来说,HART 是一种硬件线程,类似于任何其他架构的硬件线程。RISC-V 核心和 harts 的区别与其他架构的 core 和硬件线程的区别相同——这里没有什么新鲜的。然而,HART 的概念可以直接在硬件中实现,也可以虚拟化实现,它是执行环境中的一种资源,在执行 RISC-V 指令流的过程中具有状态和进度,独立于同一执行环境中的其他软件。 | |
_2025-03-05_20:47:34_ | 2025-03-05 20:47:34 | 使用 trace-cmd 追踪内核 _ Linux 中国 - 知乎 | 原文链接失效了?试试备份 | TAGs:操作系统 linux 内核动态追踪 | Summary: This article introduces how to use the trace-cmd tool on Linux to trace kernel functions, as an alternative to using ftrace directly. The author explains that using trace-cmd makes the process easier and provides more features. The article covers installing trace-cmd, listing available tracers, starting and stopping function tracing, and adjusting the depth of the trace. The author also demonstrates how to filter the traced functions by name and track kernel modules. An example is given for tracing functions related to the ext4 file system. The article concludes with a brief mention of how to trace a specific PID.本文介绍了如何在 Linux 上使用 trace-cmd 工具来跟踪内核函数,作为直接使用 ftrace 的替代方案。作者解释说,使用 trace-cmd 可以简化该过程并提供更多功能。本文介绍了如何安装 trace-cmd、列出可用的跟踪器、启动和停止函数跟踪以及调整跟踪的深度。作者还演示了如何按名称过滤跟踪的函数并跟踪内核模块。给出了 ext4 文件系统相关的跟踪函数示例。本文最后简要提到了如何跟踪特定 PID。 | |
|
_2025-03-03_14:06:53_ | 2025-03-03 14:06:53 | HAPS-100_ High-Performance Scalable Prototyping System _ Synopsys | 原文链接失效了?试试备份 | TAGs:半导体 EDA | Summary: Synopsys is a leading provider of electronic design automation solutions and services, offering a range of products for design, verification, and manufacturing of semiconductors. Their offerings include silicon IP, verification IP, and design tools, as well as emulation and prototyping systems like HAPS-200 and ZeBu-200 for faster prototyping and software development. HAPS-100 is their highest performance and most scalable pre-silicon prototyping system, designed for software development and validation with the fastest performance and highest debug productivity.Synopsys 是电子设计自动化解决方案和服务提供商,提供一系列用于半导体设计、验证和制造的产品。他们的产品包括硅 IP、验证 IP 和设计工具,以及 HAPS-200 和 ZeBu-200 等仿真和原型系统,用于加快原型设计和软件开发。HAPS-100 是其最高性能和最具可扩展性的硅前原型系统,专为软件开发和验证而设计,具有最快的性能和最高的调试效率。 | |
_2025-03-02_17:17:38_ | 2025-03-02 17:17:38 | 首页 _ MiGPT GUI | 原文链接失效了?试试备份 | TAGs:AI应用 | Summary: This page introduces MiGPT, a GUI for securely and quickly integrating your XiaoAI box with artificial intelligence. It also provides additional GitHub links for further use and text-to-speech translation.本页介绍了 MiGPT,这是一个用于安全快速地将你的小爱盒子与人工智能集成的 GUI。它还提供了其他 GitHub 链接,以供进一步使用和文本到语音转换。 | |
_2025-03-02_13:58:48_ | 2025-03-02 13:58:48 | SR-IOV Configuration Guide—Intel® Ethernet CNA X710 & XL710 on SuSE Linux_ Enterprise Server 12__ Technical Brief | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 IO SR-IOV | Summary: This page is about the configuration guide for using Intel Ethernet CNA X710 and XL710 series adapters on SuSE Linux Enterprise Server 12 for SR-IOV. The page introduces the Intel Ethernet Controllers X710 and XL710, and the Intel Ethernet Converged Network Adapter X710 Series and Intel Ethernet Server Adapter XL710. The page explains that these ethernet products offer world-class 40 GbE support and compatibility with popular Linux distributions for I/O virtualization. The usage instructions and company information are also provided.本页介绍在 SuSE Linux Enterprise Server 12 for SR-IOV 上使用 Intel 以太网 CNA X710 和 XL710 系列适配器的配置指南。本页介绍了 Intel 以太网控制器 X710 和 XL710,以及 Intel 以太网融合网络适配器 X710 系列和 Intel 以太网服务器适配器 XL710。该页面介绍了这些以太网产品提供世界一流的 40 GbE 支持,并与流行的 Linux 发行版兼容,以实现 I/O 虚拟化。此外,还提供了使用说明和公司信息。 | |
_2025-03-02_11:24:49_ | 2025-03-02 11:24:49 | riscv-non-isa_riscv-server-platform_ The RISC-V Server Platform specification defines a standardized set of hardware and sofwa | 原文链接失效了?试试备份 | TAGs:处理器 risc-v server | Summary: The RISC-V Server Platform is a specification that outlines standardized hardware and software capabilities for portable system software, such as operating systems and hypervisors, in RISC-V servers. The document includes information about the specification, its history, and dependencies. Users can clone the project, build the PDF using the Makefile, and view the document's topics, which include platform, server, os, standards, interoperability, UEFI, hypervisors, ACPI, risc-v, and BRS-I. The project is licensed under a Creative Commons Attribution 4.0 International License and has 11 stars, 5 forks, and 8 watchers.RISC-V 服务器平台是一项规范,概述了 RISC-V 服务器中便携式系统软件(如操作系统和虚拟机管理程序)的标准化硬件和软件功能。该文档包含有关规范、其历史记录和依赖项的信息。用户可以克隆项目,使用 Makefile 构建 PDF,并查看文档的主题,包括平台、服务器、操作系统、标准、互操作性、UEFI、虚拟机管理程序、ACPI、risc-v 和 BRS-I。该项目根据 Creative Commons Attribution 4.0 International License 获得许可,并拥有 11 颗星、5 个分叉和 8 个观察者。 | |
_2025-03-02_10:46:18_ | 2025-03-02 10:46:18 | 首页 _ MiGPT GUI | 原文链接失效了?试试备份 | TAGs:AI应用 | Summary: This page introduces MiGPT, a GUI for securely and quickly integrating your XiaoAI box with artificial intelligence. It also provides additional GitHub links for further use and text-to-speech translation.本页介绍了 MiGPT,这是一个用于安全快速地将你的小爱盒子与人工智能集成的 GUI。它还提供了其他 GitHub 链接,以供进一步使用和文本到语音转换。 | |
_2025-02-28_18:20:02_ | 2025-02-28 18:20:02 | deepseek-ai_DeepGEMM: DeepGEMM: 干净高效的 FP8 GEMM 内核,具有细粒度扩展 --- deepseek-ai_DeepGEMM_ DeepGEMM_ clean and efficient FP8 GEMM ker | 原文链接失效了?试试备份 | TAGs:大模型 | Summary: DeepGEMM is a library designed for clean and efficient FP8 General Matrix Multiplications (GEMMs) with fine-grained scaling, using Hopper architecture GPUs and CUDA 12.3 or above. It supports both normal and Mix-of-Experts (MoE) grouped GEMMs, with various optimizations such as warp-specialization, TMA features, and a fully JIT design. The library also provides utility functions and environment variables.DeepGEMM 是一个库,旨在使用 Hopper 架构 GPU 和 CUDA 12.3 或更高版本,实现干净高效的 FP8 通用矩阵乘法 (GEMM),进行精细缩放。它支持普通和混合专家 (MoE) 分组 GEMM,具有各种优化,例如 warp 专业化、TMA 功能和完全 JIT 设计。该库还提供实用程序函数和环境变量。 | |
_2025-02-28_14:05:35_ | 2025-02-28 14:05:35 | tech-attached-matrix-extension@lists.riscv.org _RISC-V AME 扩展的 SiFive 提案 --- tech-attached-matrix-extension@lists.riscv.org _ | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA Matrix | Summary: The text discusses a feedback exchange between team members regarding the Zvma Attached Matrix Extension (AME) proposal. The team, T1, has reviewed the proposal and finds it clear and elegant, offering significant matrix computation bandwidth while maintaining software-friendliness. They suggest some improvements, including making the data layout microarchitecture (uarch) defined instead of locked at the ISA level, addressing consistency challenges in multi-core scenarios, and specifying that matrix data should be marked as "unspecified" following any matrix configuration change. They also discuss concerns about physical design friendliness, specifically the placement of matrix computation logic alongside RAM/flop states and the potential routing contention between the computation block and on-chip memory. They emphasize the importance of addressing these issues to ensure high-performance computation and alignment across different uarch designs.本文讨论了团队成员之间关于 Zvma 附加矩阵扩展 (AME) 提案的反馈交流。T1 团队审查了该提案,发现它清晰而优雅,在保持软件友好性的同时提供了大量的矩阵计算带宽。他们提出了一些改进建议,包括在 ISA 级别定义而不是锁定数据布局微架构 (uarch),解决多核场景中的一致性挑战,以及指定在任何矩阵配置更改后应将矩阵数据标记为 “unspecified”。他们还讨论了对物理设计友好性的担忧,特别是矩阵计算逻辑与 RAM/flop 状态一起放置,以及计算块和片上存储器之间潜在的布线争用。他们强调了解决这些问题的重要性,以确保不同 uarch 设计之间的高性能计算和对齐。 | |
_2025-02-28_14:00:26_ | 2025-02-28 14:00:26 | tech-attached-matrix-extension@lists.riscv.org _RISC-V AME 扩展的 SiFive 提案 --- tech-attached-matrix-extension@lists.riscv.org _ | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA Matrix | Summary: This text is a discussion between team members regarding the Zvma Attached Matrix Extension (AME) proposal. They find the specification clear and elegant, offering significant matrix computation bandwidth while maintaining software-friendliness. However, they have some concerns about physical design friendliness, specifically the placement of matrix computation logic alongside RAM/flop states and the potential routing contention between the computation block and on-chip memory. They suggest making the data layout microarchitecture (uarch) defined instead of locked at the ISA level and allowing alternative punning schemes in future extensions for greater flexibility. They also recommend specifying that matrix data should be marked as "unspecified" following any matrix configuration change. The team is considering the uarch based on the proposal and looks forward to continued collaboration as Zvma progresses toward ratification.本文是团队成员之间关于 Zvma 附加矩阵扩展 (AME) 提案的讨论。他们发现规范清晰而优雅,在保持软件友好性的同时提供了大量的矩阵计算带宽。然而,他们对物理设计友好性有一些担忧,特别是矩阵计算逻辑与 RAM/flop 状态一起放置,以及计算块和片上存储器之间潜在的路由争用。他们建议在 ISA 级别定义而不是锁定数据布局微架构 (uarch),并在未来的扩展中允许使用其他双关模式以获得更大的灵活性。他们还建议指定在矩阵配置更改后应将矩阵数据标记为“未指定”。该团队正在根据该提案考虑 uarch,并期待在 Zvma 获得批准的过程中继续合作。 | |
_2025-02-28_13:40:48_ | 2025-02-28 13:40:48 | 玄铁矩阵乘法扩展说明 – RISC-V International --- XuanTie Matrix Multiply Extension Instructions – RISC-V International | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA | Summary: The text discusses the XuanTie Matrix Multiply Extension (MME) for RISC-V processors, designed to meet the demands for AI computing power with independent matrix extensions. The benefits of independent matrix extensions include independent programming models, developer-friendly design, and simplified hardware implementation. The XuanTie MME includes matrix multiply-accumulate instructions, matrix load/store instructions, and other matrix computations to improve AI computing power. The extension supports various data types and sizes and is scalable, portable, and decoupled from vector extensions. The design has been open-sourced on GitHub for further development.本文讨论了用于 RISC-V 处理器的 XuanTie 矩阵乘法扩展 (MME),旨在通过独立的矩阵扩展满足对 AI 计算能力的需求。独立矩阵扩展的优势包括独立的编程模型、开发人员友好的设计和简化的硬件实现。玄铁 MME 包括矩阵乘法累加指令、矩阵加载/存储指令和其他矩阵计算,以提高 AI 计算能力。该扩展支持各种数据类型和大小,并且可扩展、可移植,并且与矢量扩展分离。该设计已在 GitHub 上开源,以供进一步开发。 | |
_2025-02-28_13:35:26_ | 2025-02-28 13:35:26 | 从向量到矩阵:RISC-V 矩阵扩展的未来 - 知乎 --- From Vector to Matrix_ The Future of RISC-V Matrix Extensions - 知乎 | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA | Summary: This text is about the development and future possibilities of Matrix Extensions in RISC-V, a open-source Instruction Set Architecture (ISA). The text discusses various matrix extension proposals, such as Integrated Matrix Extension from Spacemit and Attached Matrix Extension from Xuantie, Stream Computing, and SiFive (Zvma). The author compares the tradeoffs between Integrated and Attached Matrix Extensions, and the relationship between existing Vector Extensions and Matrix Extensions. The text also explores potential hardware implementations of RISC-V matrix acceleration.本文介绍了 RISC-V(一种开源指令集架构 (ISA))中矩阵扩展的开发和未来可能性。本文讨论了各种矩阵扩展提案,例如 Spacemit 的 Integrated Matrix Extension 和 Xuantie 的 Attached Matrix Extension、Stream Computing 和 SiFive (Zvma)。作者比较了 Integrated Matrix Extensions 和 Attached Matrix Extensions 之间的权衡,以及现有 Vector Extensions 和 Matrix Extensions 之间的关系。本文还探讨了 RISC-V 矩阵加速的潜在硬件实现。 | |
_2025-02-28_10:11:19_ | 2025-02-28 10:11:19 | GiantVM_ a type-II hypervisor implementing many-to-one virtualization _ Proceedings of the 16th ACM SIGPLAN_SIGOPS Internation | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 多虚一 | Summary: This paper introduces GiantVM, an open-source distributed hypervisor that provides many-to-one virtualization to aggregate resources from multiple physical machines and offers a uniform hardware abstraction for guest OSes. GiantVM combines the benefits of scale-up and scale-out solutions, enabling unmodified applications to run with a huge amount of physical resources. It also leverages distributed shared memory to achieve aggregation of memory and proposes techniques to deal with the challenges of CPU and I/O virtualization in distributed environments. The authors have implemented GiantVM based on the state-of-the-art type-II hypervisor QEMU-KVM and it can currently host conventional OSes such as Linux. Evaluations show that GiantVM outperforms Spark by up to 3.4X with two text-processing programs.本文介绍了 GiantVM,这是一种开源分布式管理程序,它提供多对一虚拟化来聚合来自多个物理机的资源,并为来宾操作系统提供统一的硬件抽象。GiantVM 结合了纵向扩展和横向扩展解决方案的优势,使未经修改的应用程序能够使用大量物理资源运行。它还利用分布式共享内存来实现内存聚合,并提出了应对分布式环境中 CPU 和 I/O 虚拟化挑战的技术。作者基于最先进的 II 类管理程序 QEMU-KVM 实现了 GiantVM,它目前可以托管 Linux 等传统操作系统。评估表明,GiantVM 使用两个文本处理程序的性能比 Spark 高出 3.4 倍。 | |
_2025-02-28_09:59:02_ | 2025-02-28 09:59:02 | GiantVM | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 产品 多虚一 | Summary: GiantVM is a distributed hypervisor that utilizes resources from multiple physical machines, offering a uniform hardware abstraction to the guest OS through techniques like IPI, interrupt, and I/O forwarding. It's based on QEMU-KVM and distributed shared memory (DSM) technology. The input includes instructions on how to install and run GiantVM on a single machine using Ubuntu 16.04.7 and Linux-DSM. The paper "GiantVM: A Type-II Hypervisor Implementing Many-to-one Virtualization" was published in the Proceedings of the 16th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments in 2020. The authors include Zhengwei Qi, Haibing Guan, and others from Shanghai Jiao Tong University.GiantVM 是一个分布式管理程序,它利用来自多个物理机的资源,通过 IPI、中断和 I/O 转发等技术为来宾操作系统提供统一的硬件抽象。它基于 QEMU-KVM 和分布式共享内存 (DSM) 技术。输入包括有关如何使用 Ubuntu 16.04.7 和 Linux-DSM 在单台计算机上安装和运行 GiantVM 的说明。论文《GiantVM:实现多对一虚拟化的 Type-II Hypervisor》发表在 2020 年第 16 届 ACM SIGPLAN/SIGOPS 虚拟执行环境国际会议的论文集上。作者包括来自上海交通大学的 Zhengwei Qi、Haibing Guan 等人。 | |
_2025-02-27_17:36:45_ | 2025-02-27 17:36:45 | 清华大学出版社-图书详情-《深入浅出系统虚拟化:原理与实践》 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 书 | Summary: This book is a professional text on system virtualization. It covers the principles and practices of system virtualization, including its history, trends, and main functions and classifications. The book also introduces various types of virtualization systems, such as openEuler, and explains how they implement CPU and interrupt virtualization using QEMU/KVM and GiantVM. The book is written by Qi Zhengwei, a professor at Shanghai Jiao Tong University, and is aimed at readers with a basic understanding of hardware architecture and operating systems. It is a comprehensive resource for understanding system virtualization technology. | |
_2025-02-27_17:07:00_ | 2025-02-27 17:07:00 | GiantVM_ GiantVM,又称巨型虚拟机,是一个基于 QEMU-KVM 的_多虚一_分布式Type II型hypervisor | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 产品 | Summary: GiantVM is a distributed Type II hypervisor developed by the trusted cloud computing group of Shanghai Jiao Tong University's scalable computing and systems laboratory. It manages resources from multiple physical machines and provides a unified hardware abstraction for guest operating systems. GiantVM uses RDMA technology for hardware abstraction at the virtualization level, and provides distributed QEMU for cross-node virtual machine abstraction, KVM for lower-level physical machine management, and low-latency distributed shared memory DSM. GiantVM has two main code repositories: Linux-DSM and QEMU, each containing corresponding modifications to KVM and QEMU. To join, send an application email to xiangyuxin@sjtu.edu.cn. Gitee is a Chinese open-source code hosting platform with features like GitHub, including importing GitHub repositories and Git commands.GiantVM 是由上海交通大学可扩展计算与系统实验室可信云计算团队开发的分布式 Type II 虚拟机管理程序。它管理来自多个物理机的资源,并为客户机操作系统提供统一的硬件抽象。GiantVM 在虚拟化级别使用 RDMA 技术进行硬件抽象,并提供分布式 QEMU 用于跨节点虚拟机抽象,KVM 用于低级物理机管理,以及低延迟分布式共享内存 DSM。GiantVM 有两个主要的代码仓库:Linux-DSM 和 QEMU,每个代码仓库都包含对 KVM 和 QEMU 的相应修改。要加入,请向 xiangyuxin@sjtu.edu.cn 发送申请电子邮件。Gitee 是一个中国开源代码托管平台,具有 GitHub 等功能,包括导入 GitHub 仓库和 Git 命令。 | |
_2025-02-27_11:58:26_ | 2025-02-27 11:58:26 | DeepSeek开源周总结和感悟【更新至第三天】 - 知乎 | 原文链接失效了?试试备份 | TAGs:处理器 异构计算 软硬协同 大模型 | Summary: A summary of a blog post about DeepSeek's open-source week, which released projects that optimize AI performance on specific hardware. The author reflects on the challenges of completing such work at foreign AI companies due to hardware restrictions, and highlights three projects, FlashMLA, DeepEP, and DeepGEMM, and their contributions to improving AI performance on limited hardware. The author emphasizes that optimal performance requires understanding both the AI models and the hardware, and considers DeepSeek's potential impact on the industry. | |
_2025-02-27_11:32:05_ | 2025-02-27 11:32:05 | 芯华章X-EPIC _ 从芯定义智慧未来 | 原文链接失效了?试试备份 | TAGs:半导体 EDA | Summary: X-Epic is a Chinese EDA (Electronic Design Automation) company focused on agile verification solutions from chips to systems. It has filed over 200 patent applications and released several commercial-grade verification products built around platformization, intelligence, and the cloud. Its offerings include a full-flow digital verification EDA toolchain and seven product series covering hardware emulation systems, FPGA prototyping systems, intelligent scenario verification, static and formal verification, logic simulation, system debugging, and verification clouds. The company collaborates with partners on RISC-V chip verification and testing, holds international certifications such as ISO 26262, and has received strategic investments from major funds and companies. Its research institute, founded in 2017, works on next-generation "EDA 2.0" technology, and it runs the open-source EDA community EDAGit.com to lower the barrier to using formal verification tools. It has won awards for its high-performance FPGA prototyping and hardware emulation systems; the site lists contacts for business, sales, recruitment, and research collaboration. | |
_2025-02-26_16:35:36_ | 2025-02-26 16:35:36 | RISC-V Non-ISA Specifications | 原文链接失效了?试试备份 | TAGs:处理器 risc-v | Summary: This page describes the RISC-V non-ISA specifications hosted on GitHub, which cover documentation, architecture tests, and specifications for various interfaces and tools that sit outside the instruction set architecture itself. The work is spread across several repositories, each focusing on a different aspect of the RISC-V ecosystem; none of them modify the RISC-V Instruction Set Architecture. The page also lists popular related repositories. | |
_2025-02-26_11:29:14_ | 2025-02-26 11:29:14 | tech-announce@lists.riscv.org _ Public review for Smctr_Ssctr ISA extensions | 原文链接失效了?试试备份 | TAGs:处理器 risc-v ISA | Summary: The RISC-V Foundation has opened a public review period for the proposed Control Transfer Records (Smctr/Ssctr) standard extensions to the RISC-V Instruction Set Architecture (ISA). The review runs from July 23 to August 22, 2024, and feedback is invited via email or GitHub. The extensions are described in a PDF specification available on github.com; corrections and minor changes may be incorporated during the review. Upon completion of the review, the Privileged ISA Committee will recommend the extensions for approval and ratification. (by Mozilla Orbit AI) | |
_2025_2_13_09:59:17_ | 2025_2_13 09:59:17 | 首页 · 魔搭社区 | 原文链接失效了?试试备份 | TAGs:AI应用 | saved date: Thu Feb 13 2025 09:59:17 GMT+0800 (中国标准时间) | |
_2025_1_24_19:58:22_ | 2025_1_24 19:58:22 | Veripool | 原文链接失效了?试试备份 | TAGs: | saved date: Fri Jan 24 2025 19:58:22 GMT+0800 (中国标准时间) | |
_2025_1_23_15:44:27_ | 2025_1_23 15:44:27 | Veripool | 原文链接失效了?试试备份 | TAGs:处理器 验证 | saved date: Thu Jan 23 2025 15:44:27 GMT+0800 (中国标准时间) | |
_2025_1_23_15:14:47_ | 2025_1_23 15:14:47 | | 原文链接失效了?试试备份 | TAGs:处理器 验证 | saved date: Thu Jan 23 2025 15:14:47 GMT+0800 (中国标准时间) | |
_2025_1_23_15:14:41_ | 2025_1_23 15:14:41 | | 原文链接失效了?试试备份 | TAGs:处理器 验证 | saved date: Thu Jan 23 2025 15:14:41 GMT+0800 (中国标准时间) | |
_2025_1_9_17:52:35_ | 2025_1_9 17:52:35 | | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 书 | saved date: Thu Jan 09 2025 17:52:35 GMT+0800 (中国标准时间) | |
_2024_12_20_18:23:05_ | 2024_12_20 18:23:05 | 指南 | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 机密计算 | saved date: Fri Dec 20 2024 18:23:05 GMT+0800 (中国标准时间) | |
_2024_11_14_12_47_30_ | 2024_11_14 12_47_30 | | 原文链接失效了?试试备份 | TAGs:处理器 risc-v 安全 | saved date: Thu Nov 14 2024 12:47:30 GMT+0800 (中国标准时间) | |
_2024_11_13_16_27_01_ | 2024_11_13 16_27_01 | | 原文链接失效了?试试备份 | TAGs:调试 | saved date: Wed Nov 13 2024 16:27:01 GMT+0800 (中国标准时间) | |
_2024_11_13_10_35_34_ | 2024_11_13 10_35_34 | 高端调试 | 原文链接失效了?试试备份 | TAGs:调试 | saved date: Wed Nov 13 2024 10:35:34 GMT+0800 (中国标准时间) | |
_2024_11_13_02_25_57_ | 2024_11_13 02_25_57 | 及 Linux 技术路线 | 原文链接失效了?试试备份 | TAGs: | saved date: Wed Nov 13 2024 02:25:57 GMT+0800 (中国标准时间) | |
_2024_10_30_19_34_28_ | 2024_10_30 19_34_28 | | 原文链接失效了?试试备份 | TAGs:语言 编译 | saved date: Wed Oct 30 2024 19:34:28 GMT+0800 (中国标准时间) | |
_2024_10_30_15_55_31_ | 2024_10_30 15_55_31 | - 知乎 | 原文链接失效了?试试备份 | TAGs:设备与驱动 设备树 | saved date: Wed Oct 30 2024 15:55:31 GMT+0800 (中国标准时间) | |
_2024_10_26_16_34_33_ | 2024_10_26 16_34_33 | kernelnote | 原文链接失效了?试试备份 | TAGs: | saved date: Sat Oct 26 2024 16:34:33 GMT+0800 (中国标准时间) | |
_2024_10_25_16_21_51_ | 2024_10_25 16_21_51 | | 原文链接失效了?试试备份 | TAGs:处理器 中断 用户态中断 | saved date: Fri Oct 25 2024 16:21:51 GMT+0800 (中国标准时间) | |
_2024_10_15_22_43_49_ | 2024_10_15 22_43_49 | No title | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Oct 15 2024 22:43:49 GMT+0800 (中国标准时间) | |
_2024_10_15_22_32_25_ | 2024_10_15 22_32_25 | -vIOMMU - 知乎 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Oct 15 2024 22:32:25 GMT+0800 (中国标准时间) | |
_2024_10_15_21_57_17_ | 2024_10_15 21_57_17 | -腾讯云开发者社区-腾讯云 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Oct 15 2024 21:57:17 GMT+0800 (中国标准时间) | |
_2024_10_15_21_48_01_ | 2024_10_15 21_48_01 | 项目概述 _ 一生一芯 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Oct 15 2024 21:48:01 GMT+0800 (中国标准时间) | |
_2024_10_15_21_44_06_ | 2024_10_15 21_44_06 | No title | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Oct 15 2024 21:44:06 GMT+0800 (中国标准时间) | |
_2024_10_11_11_46_19_ | 2024_10_11 11_46_19 | | 原文链接失效了?试试备份 | TAGs: | saved date: Fri Oct 11 2024 11:46:19 GMT+0800 (中国标准时间) | |
_2024_8_30_11_19_15_ | 2024_8_30 11_19_15 | No title | 原文链接失效了?试试备份 | TAGs: | saved date: Fri Aug 30 2024 11:19:15 GMT+0800 (中国标准时间) | |
_2024_8_30_11_19_07_ | 2024_8_30 11_19_07 | 微信公众平台 | 原文链接失效了?试试备份 | TAGs: | saved date: Fri Aug 30 2024 11:19:07 GMT+0800 (中国标准时间) | |
_2024_8_30_11_19_03_ | 2024_8_30 11_19_03 | 微信公众平台 | 原文链接失效了?试试备份 | TAGs: | saved date: Fri Aug 30 2024 11:19:03 GMT+0800 (中国标准时间) | |
_2024_8_29_16_32_59_ | 2024_8_29 16_32_59 | 龙芯开源社区 | 原文链接失效了?试试备份 | TAGs:处理器 龙芯 | saved date: Thu Aug 29 2024 16:32:59 GMT+0800 (中国标准时间) | |
_2024_8_29_14_42_03_ | 2024_8_29 14_42_03 | 一生一芯 | 原文链接失效了?试试备份 | TAGs:处理器 | saved date: Thu Aug 29 2024 14:42:03 GMT+0800 (中国标准时间) | |
_2024_8_22_02_00_23_ | 2024_8_22 02_00_23 | smcdef - 知乎 | 原文链接失效了?试试备份 | TAGs: | saved date: Thu Aug 22 2024 02:00:23 GMT+0800 (中国标准时间) | |
_2024_8_22_02_00_20_ | 2024_8_22 02_00_20 | TLB原理 - 知乎 | 原文链接失效了?试试备份 | TAGs: | saved date: Thu Aug 22 2024 02:00:20 GMT+0800 (中国标准时间) | |
_2024_8_20_22_48_10_ | 2024_8_20 22_48_10 | No title | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Aug 20 2024 22:48:10 GMT+0800 (中国标准时间) | |
_2024_8_20_22_46_42_ | 2024_8_20 22_46_42 | P4 学习笔记 - 知乎 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Aug 20 2024 22:46:42 GMT+0800 (中国标准时间) | |
_2024_8_20_22_46_36_ | 2024_8_20 22_46_36 | 旋转图片验证 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Aug 20 2024 22:46:36 GMT+0800 (中国标准时间) | |
_2024_8_4_17_39_40_ | 2024_8_4 17_39_40 | - 知乎 | 原文链接失效了?试试备份 | TAGs:处理器 GPU | saved date: Sun Aug 04 2024 17:39:40 GMT+0800 (中国标准时间) | |
_2024_8_2_23_34_53_ | 2024_8_2 23_34_53 | 逸集晟 | 原文链接失效了?试试备份 | TAGs:半导体 | saved date: Fri Aug 02 2024 23:34:53 GMT+0800 (中国标准时间) | |
_2024_8_2_17_32_38_ | 2024_8_2 17_32_38 | | 原文链接失效了?试试备份 | TAGs:处理器 GPU 内存 | saved date: Fri Aug 02 2024 17:32:38 GMT+0800 (中国标准时间) | |
_2024_8_1_16_14_15_ | 2024_8_1 16_14_15 | 围剿英伟达 | 原文链接失效了?试试备份 | TAGs:处理器 GPU | saved date: Thu Aug 01 2024 16:14:15 GMT+0800 (中国标准时间) | |
_2024_7_25_23_37_40_ | 2024_7_25 23_37_40 | | 原文链接失效了?试试备份 | TAGs: | saved date: Thu Jul 25 2024 23:37:40 GMT+0800 (中国标准时间) | |
_2024_7_25_14_00_12_ | 2024_7_25 14_00_12 | | 原文链接失效了?试试备份 | TAGs:处理器 安全 | saved date: Thu Jul 25 2024 14:00:12 GMT+0800 (中国标准时间) | |
_2024_7_23_22_16_41_ | 2024_7_23 22_16_41 | Hello 算法 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Jul 23 2024 22:16:41 GMT+0800 (中国标准时间) | |
_2024_7_23_20_07_13_ | 2024_7_23 20_07_13 | Hello 算法 | 原文链接失效了?试试备份 | TAGs:算法 | saved date: Tue Jul 23 2024 20:07:13 GMT+0800 (中国标准时间) | |
_2024_7_18_22_18_20_ | 2024_7_18 22_18_20 | 通义 | 原文链接失效了?试试备份 | TAGs: | saved date: Thu Jul 18 2024 22:18:20 GMT+0800 (中国标准时间) | |
_2024_7_18_22_18_06_ | 2024_7_18 22_18_06 | giscus | 原文链接失效了?试试备份 | TAGs: | saved date: Thu Jul 18 2024 22:18:06 GMT+0800 (中国标准时间) | |
_2024_7_18_16_38_13_ | 2024_7_18 16_38_13 | 通义 | 原文链接失效了?试试备份 | TAGs:AI应用 | saved date: Thu Jul 18 2024 16:38:13 GMT+0800 (中国标准时间) | |
_2024_7_12_16_43_07_ | 2024_7_12 16_43_07 | | 原文链接失效了?试试备份 | TAGs:虚拟化&容器 书 | saved date: Fri Jul 12 2024 16:43:07 GMT+0800 (中国标准时间) | |
_2024_7_4_10_40_59_ | 2024_7_4 10_40_59 | | 原文链接失效了?试试备份 | TAGs:语言 go | saved date: Thu Jul 04 2024 10:40:59 GMT+0800 (中国标准时间) | |
_2024_6_30_22_53_34_ | 2024_6_30 22_53_34 | No title | 原文链接失效了?试试备份 | TAGs: | saved date: Sun Jun 30 2024 22:53:34 GMT+0800 (中国标准时间) | |
_2024_6_30_22_51_46_ | 2024_6_30 22_51_46 | 蜗窝科技 | 原文链接失效了?试试备份 | TAGs: | saved date: Sun Jun 30 2024 22:51:46 GMT+0800 (中国标准时间) | |
_2024_6_26_19_55_03_ | 2024_6_26 19_55_03 | | 原文链接失效了?试试备份 | TAGs: | saved date: Wed Jun 26 2024 19:55:03 GMT+0800 (中国标准时间) | |
_2024_6_26_14_30_13_ | 2024_6_26 14_30_13 | Longhorn | 原文链接失效了?试试备份 | TAGs: | saved date: Wed Jun 26 2024 14:30:13 GMT+0800 (中国标准时间) | |
_2024_6_25_16_35_52_ | 2024_6_25 16_35_52 | | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Jun 25 2024 16:35:52 GMT+0800 (中国标准时间) | |
_2024_6_25_16_03_47_ | 2024_6_25 16_03_47 | 西安电子科技大学出版社 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Jun 25 2024 16:03:47 GMT+0800 (中国标准时间) | |
_2024_6_25_13_06_37_ | 2024_6_25 13_06_37 | #Linux设备驱动 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Jun 25 2024 13:06:37 GMT+0800 (中国标准时间) | |
_2024_6_25_12_59_32_ | 2024_6_25 12_59_32 | #虚拟化 | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Jun 25 2024 12:59:32 GMT+0800 (中国标准时间) | |
_2024_6_19_10_36_29_ | 2024_6_19 10_36_29 | | 原文链接失效了?试试备份 | TAGs: | saved date: Wed Jun 19 2024 10:36:29 GMT+0800 (中国标准时间) | |
_2024_6_17_15_16_21_ | 2024_6_17 15_16_21 | _ Hexo | 原文链接失效了?试试备份 | TAGs: | saved date: Mon Jun 17 2024 15:16:21 GMT+0800 (中国标准时间) | |
_2024_6_17_09_26_03_ | 2024_6_17 09_26_03 | - 知乎 | 原文链接失效了?试试备份 | TAGs: | saved date: Mon Jun 17 2024 09:26:03 GMT+0800 (中国标准时间) | |
_2024_6_17_09_25_58_ | 2024_6_17 09_25_58 | - 知乎 | 原文链接失效了?试试备份 | TAGs: | saved date: Mon Jun 17 2024 09:25:58 GMT+0800 (中国标准时间) | |
_2024_6_14_19_42_56_ | 2024_6_14 19_42_56 | No title | 原文链接失效了?试试备份 | TAGs: | saved date: Fri Jun 14 2024 19:42:56 GMT+0800 (中国标准时间) | |
_2024_6_14_17_22_41_ | 2024_6_14 17_22_41 | pvh | 原文链接失效了?试试备份 | TAGs: | saved date: Fri Jun 14 2024 17:22:41 GMT+0800 (中国标准时间) | |
_2024_6_11_12_02_43_ | 2024_6_11 12_02_43 | | 原文链接失效了?试试备份 | TAGs: | saved date: Tue Jun 11 2024 12:02:43 GMT+0800 (中国标准时间) | |
_linux_操作系统_ | linux 操作系统 | | 原文链接失效了?试试备份 | TAGs:操作系统 linux | saved date: Tue Dec 17 2024 14:17:26 GMT+0800 (中国标准时间) | |
_._./index.html_ | . ./index.html | | 原文链接失效了?试试备份 | TAGs: | hello world! | |