This repository has been archived by the owner on Apr 22, 2023. It is now read-only.

Add support for building on ARM #25641

Status: Closed. Wants to merge 1 commit.
deps/v8/src/arm/assembler-arm.cc (30 additions, 1 deletion)
@@ -50,6 +50,35 @@ bool CpuFeatures::initialized_ = false;
unsigned CpuFeatures::supported_ = 0;
unsigned CpuFeatures::found_by_runtime_probing_ = 0;

#ifdef __arm__

bool OS::ArmCpuHasFeature(CpuFeature feature) {

Reviewer: Why define these functions in the ARM assembler for all OSes? Wouldn't that change break ARM support on platforms other than the one for which it was specifically written?

Author: I've screwed up this part of the patch... These changes should go in deps/v8/src/platform-freebsd.cc

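  // No runtime CPU feature probing is implemented for this platform, so
  // conservatively report that no optional feature is present.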
  return false;
}

CpuImplementer OS::GetCpuImplementer() {
  static bool use_cached_value = false;
  static CpuImplementer cached_value = UNKNOWN_IMPLEMENTER;
  if (use_cached_value) {
    return cached_value;
  }
  cached_value = ARM_IMPLEMENTER;

Reviewer: Does FreeBSD support querying the implementer code instead of hardcoding it?

Author: No


  use_cached_value = true;
  return cached_value;
}
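
For contrast, a minimal sketch (not part of the patch; the helper name ReadCpuImplementer is hypothetical) of how a platform with /proc/cpuinfo, such as Linux, can query the implementer code instead of hardcoding it:

#include <cstdio>
#include <cstdlib>
#include <cstring>

// Returns the ARM implementer code (e.g. 0x41 for ARM Ltd.), or -1 if it
// cannot be determined. FreeBSD has no /proc/cpuinfo, hence the hardcoded
// constant in the patch above.
static int ReadCpuImplementer() {
  FILE* fp = fopen("/proc/cpuinfo", "r");
  if (fp == NULL) return -1;
  char line[256];
  int implementer = -1;
  while (fgets(line, sizeof(line), fp) != NULL) {
    // Lines look like: "CPU implementer : 0x41".
    if (strncmp(line, "CPU implementer", 15) == 0) {
      const char* colon = strchr(line, ':');
      if (colon != NULL) {
        implementer = static_cast<int>(strtol(colon + 1, NULL, 0));
      }
      break;
    }
  }
  fclose(fp);
  return implementer;
}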


bool OS::ArmUsingHardFloat() {
#if defined(__ARM_PCS_VFP)
  return true;
#else
  return false;
#endif
}
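// Note: GCC and clang predefine __ARM_PCS_VFP when compiling with
// -mfloat-abi=hard, so this check reflects the float ABI the binary
// itself was built with.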

#endif // def __arm__


// Get the CPU features enabled by the build. For cross compilation the
// preprocessor symbols CAN_USE_ARMV7_INSTRUCTIONS and CAN_USE_VFP3_INSTRUCTIONS
@@ -749,7 +778,7 @@ static bool fits_shifter(uint32_t imm32,
                         Instr* instr) {
  // imm32 must be unsigned.
  for (int rot = 0; rot < 16; rot++) {
-    uint32_t imm8 = (imm32 << 2*rot) | (imm32 >> (32 - 2*rot));
+    uint32_t imm8 = rot == 0 ? imm32 : ((imm32 << 2*rot) | (imm32 >> (32 - 2*rot)));

Reviewer: Is this change really needed? It seems that both shift operations would zero-fill when applied to a uint32_t, and thus when rot == 0, this operation would be equivalent to imm32 | 0, or imm32, but I may be missing something.

Author: This is an upstream patch (we are using clang on FreeBSD):
https://codereview.chromium.org/979633002/
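
The subtlety here is that when rot == 0 the right-shift count is 32, and shifting a 32-bit value by its full width is undefined behavior in C and C++ (not a guaranteed zero-fill), so an optimizing compiler such as clang may legally miscompile it. A minimal sketch of the hazard and the guarded form (the function name RotateRight is hypothetical):

#include <cstdint>

uint32_t RotateRight(uint32_t v, int bits) {
  // Undefined behavior when bits == 0: "v >> (32 - 0)" shifts a 32-bit
  // value by its full width (C++11 [expr.shift]p1).
  // return (v << bits) | (v >> (32 - bits));

  // Guarded form, mirroring the patched line above:
  return bits == 0 ? v : ((v << bits) | (v >> (32 - bits)));
}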

    if ((imm8 <= 0xff)) {
      *rotate_imm = rot;
      *immed_8 = imm8;
deps/v8/src/arm/cpu-arm.cc (5 additions, 1 deletion)
@@ -64,7 +64,7 @@ void CPU::FlushICache(void* start, size_t size) {
  // None of this code ends up in the snapshot so there are no issues
  // around whether or not to generate the code when building snapshots.
  Simulator::FlushICache(Isolate::Current()->simulator_i_cache(), start, size);
-#else
+#elif defined(__linux__)

Reviewer: Would using #elif !defined(__FreeBSD__) be more appropriate here? If I understand correctly, before this change this code is compiled for all platforms where deps/v8/src/arm/cpu-arm.cc is compiled, and thus not necessarily only on Linux.

Author: From my understanding, __ARM_NR_cacheflush is Linux-specific.

Reviewer: Ok, makes sense 👍

  // Ideally, we would call
  //   syscall(__ARM_NR_cacheflush, start,
  //           reinterpret_cast<intptr_t>(start) + size, 0);
@@ -103,6 +103,10 @@ void CPU::FlushICache(void* start, size_t size) {
    : "0" (beg), "r" (end), "r" (flg), "r" (__ARM_NR_cacheflush)
    : "r3");
#endif
#elif defined(__FreeBSD__)
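  // __clear_cache is a GCC/clang builtin that flushes the instruction
  // cache over the range [start, start + size).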
  __clear_cache(start, reinterpret_cast<char*>(start) + size);
#else
#error "No cache flush implementation on this platform"
#endif
}

deps/v8/src/atomicops.h (3 additions, 1 deletion)
@@ -160,8 +160,10 @@ Atomic64 Release_Load(volatile const Atomic64* ptr);
#elif defined(__GNUC__) && \
    (defined(V8_HOST_ARCH_IA32) || defined(V8_HOST_ARCH_X64))
#include "atomicops_internals_x86_gcc.h"
-#elif defined(__GNUC__) && defined(V8_HOST_ARCH_ARM)
+#elif defined(__GNUC__) && defined(__linux__) && defined(V8_HOST_ARCH_ARM)

Reviewer: Same comment as above.

Author: The code in atomicops_internals_arm_gcc.h is really Linux/ARM-specific.

Reviewer: Right, sounds good 👍
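
(For context: atomicops_internals_arm_gcc.h relies on the Linux kernel's __kernel_cmpxchg helper, mapped at the fixed address 0xffff0fc0, which FreeBSD does not provide.)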

#include "atomicops_internals_arm_gcc.h"
#elif defined(__FreeBSD__) && defined(V8_HOST_ARCH_ARM)
#include "atomicops_internals_generic_gcc.h"
#elif defined(__GNUC__) && defined(V8_HOST_ARCH_MIPS)
#include "atomicops_internals_mips_gcc.h"
#else
deps/v8/src/atomicops_internals_generic_gcc.h (135 additions, 0 deletions, new file)
@@ -0,0 +1,135 @@
// Copyright 2013 Red Hat Inc. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Red Hat Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// This file is an internal atomic implementation, use atomicops.h instead.

#ifndef V8_ATOMICOPS_INTERNALS_GENERIC_GCC_H_
#define V8_ATOMICOPS_INTERNALS_GENERIC_GCC_H_

namespace v8 {
namespace internal {

inline Atomic32 NoBarrier_CompareAndSwap(volatile Atomic32* ptr,
                                         Atomic32 old_value,
                                         Atomic32 new_value) {
  __atomic_compare_exchange_n(ptr, &old_value, new_value, true,
                              __ATOMIC_RELAXED, __ATOMIC_RELAXED);
  return old_value;
}

inline Atomic32 NoBarrier_AtomicExchange(volatile Atomic32* ptr,
                                         Atomic32 new_value) {
  return __atomic_exchange_n(ptr, new_value, __ATOMIC_RELAXED);
}

inline Atomic32 NoBarrier_AtomicIncrement(volatile Atomic32* ptr,
                                          Atomic32 increment) {
  return __atomic_add_fetch(ptr, increment, __ATOMIC_RELAXED);
}

inline Atomic32 Barrier_AtomicIncrement(volatile Atomic32* ptr,
                                        Atomic32 increment) {
  return __atomic_add_fetch(ptr, increment, __ATOMIC_SEQ_CST);
}

inline Atomic32 Acquire_CompareAndSwap(volatile Atomic32* ptr,
                                       Atomic32 old_value,
                                       Atomic32 new_value) {
  __atomic_compare_exchange(ptr, &old_value, &new_value, true,
                            __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
  return old_value;
}

inline Atomic32 Release_CompareAndSwap(volatile Atomic32* ptr,
                                       Atomic32 old_value,
                                       Atomic32 new_value) {
  __atomic_compare_exchange_n(ptr, &old_value, new_value, true,
                              __ATOMIC_RELEASE, __ATOMIC_ACQUIRE);
  return old_value;
}

inline void NoBarrier_Store(volatile Atomic32* ptr, Atomic32 value) {
  __atomic_store_n(ptr, value, __ATOMIC_RELAXED);
}

inline void MemoryBarrier() {
  __sync_synchronize();
}

inline void Acquire_Store(volatile Atomic32* ptr, Atomic32 value) {
  __atomic_store_n(ptr, value, __ATOMIC_SEQ_CST);
}

inline void Release_Store(volatile Atomic32* ptr, Atomic32 value) {
  __atomic_store_n(ptr, value, __ATOMIC_RELEASE);
}

inline Atomic32 NoBarrier_Load(volatile const Atomic32* ptr) {
  return __atomic_load_n(ptr, __ATOMIC_RELAXED);
}

inline Atomic32 Acquire_Load(volatile const Atomic32* ptr) {
  return __atomic_load_n(ptr, __ATOMIC_ACQUIRE);
}

inline Atomic32 Release_Load(volatile const Atomic32* ptr) {
  return __atomic_load_n(ptr, __ATOMIC_SEQ_CST);
}

#ifdef __LP64__

inline void Release_Store(volatile Atomic64* ptr, Atomic64 value) {
  __atomic_store_n(ptr, value, __ATOMIC_RELEASE);
}

inline Atomic64 Acquire_Load(volatile const Atomic64* ptr) {
  return __atomic_load_n(ptr, __ATOMIC_ACQUIRE);
}

inline Atomic64 Acquire_CompareAndSwap(volatile Atomic64* ptr,
                                       Atomic64 old_value,
                                       Atomic64 new_value) {
  __atomic_compare_exchange_n(ptr, &old_value, new_value, true,
                              __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
  return old_value;
}

inline Atomic64 NoBarrier_CompareAndSwap(volatile Atomic64* ptr,
                                         Atomic64 old_value,
                                         Atomic64 new_value) {
  __atomic_compare_exchange_n(ptr, &old_value, new_value, true,
                              __ATOMIC_RELAXED, __ATOMIC_RELAXED);
  return old_value;
}

#endif  // defined(__LP64__)

}  // namespace internal
}  // namespace v8

#endif  // V8_ATOMICOPS_INTERNALS_GENERIC_GCC_H_
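
These primitives map one-to-one onto GCC's __atomic builtins. As a quick illustration of how the acquire/release pair is meant to be used (a sketch only, not part of the patch; Producer, Consumer, flag, and data are hypothetical names, and atomicops.h is assumed to be on the include path):

#include "atomicops.h"

namespace {

v8::internal::Atomic32 flag = 0;
int data = 0;

// Thread A: write the payload, then publish it with a release store.
void Producer() {
  data = 42;
  v8::internal::Release_Store(&flag, 1);
}

// Thread B: an acquire load that observes flag == 1 also guarantees
// that the earlier write to data is visible.
void Consumer() {
  if (v8::internal::Acquire_Load(&flag) == 1) {
    // data == 42 here.
  }
}

}  // namespace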
deps/v8/src/platform-freebsd.cc (6 additions, 6 deletions)
@@ -343,7 +343,7 @@ VirtualMemory::VirtualMemory(size_t size, size_t alignment)
  void* reservation = mmap(OS::GetRandomMmapAddr(),
                           request_size,
                           PROT_NONE,
-                          MAP_PRIVATE | MAP_ANON | MAP_NORESERVE,
+                          MAP_PRIVATE | MAP_ANON,
                           kMmapFd,
                           kMmapFdOffset);
  if (reservation == MAP_FAILED) return;
@@ -415,7 +415,7 @@ void* VirtualMemory::ReserveRegion(size_t size) {
  void* result = mmap(OS::GetRandomMmapAddr(),
                      size,
                      PROT_NONE,
-                     MAP_PRIVATE | MAP_ANON | MAP_NORESERVE,
+                     MAP_PRIVATE | MAP_ANON,

Reviewer: For the record, this seems to be a backport of https://codereview.chromium.org/1025823003/, is that correct?

Author: Correct
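
(Context: FreeBSD never actually implemented MAP_NORESERVE, and newer FreeBSD releases removed the flag from the mmap headers entirely, which is why the upstream patch drops it.)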

                      kMmapFd,
                      kMmapFdOffset);

@@ -445,7 +445,7 @@ bool VirtualMemory::UncommitRegion(void* base, size_t size) {
  return mmap(base,
              size,
              PROT_NONE,
-             MAP_PRIVATE | MAP_ANON | MAP_NORESERVE | MAP_FIXED,
+             MAP_PRIVATE | MAP_ANON | MAP_FIXED,
              kMmapFd,
              kMmapFdOffset) != MAP_FAILED;
}
@@ -690,9 +690,9 @@ static void ProfilerSignalHandler(int signal, siginfo_t* info, void* context) {
  sample->sp = reinterpret_cast<Address>(mcontext.mc_rsp);
  sample->fp = reinterpret_cast<Address>(mcontext.mc_rbp);
#elif V8_HOST_ARCH_ARM
-  sample->pc = reinterpret_cast<Address>(mcontext.mc_r15);
-  sample->sp = reinterpret_cast<Address>(mcontext.mc_r13);
-  sample->fp = reinterpret_cast<Address>(mcontext.mc_r11);
+  sample->pc = reinterpret_cast<Address>(mcontext.__gregs[_REG_PC]);
+  sample->sp = reinterpret_cast<Address>(mcontext.__gregs[_REG_SP]);
+  sample->fp = reinterpret_cast<Address>(mcontext.__gregs[_REG_FP]);
#endif
  sampler->SampleStack(sample);
  sampler->Tick(sample);
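
(On FreeBSD/ARM, mcontext_t exposes the general-purpose registers through the __gregs array, indexed by _REG_PC, _REG_SP, and _REG_FP; the mc_r15/mc_r13/mc_r11 field names used previously do not exist there, so the old code failed to compile.)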