forked from dotnet/machinelearning
ML.Net and Tensorflow integration demo. #3

Open · zeahmed wants to merge 19 commits into master from tensorflow (base: master)
Changes shown from 16 of 19 commits.

Commits:
- 7d64c20 ML.Net and Tensorflow integration demo. (zeahmed)
- 1353772 Merge remote-tracking branch 'upstream/master' into tensorflow (zeahmed)
- a6852ac Got output in flatten array. Shapes and types are preloaded. (zeahmed)
- d625f48 Added model serialization to disk using Tensorflow frozen model scheme. (zeahmed)
- 988c36e Added support for different types. (zeahmed)
- 1d4c58f Don't use TensorFlowSharp, instead carry our own subset of the TF# API (ericstj)
- 989e61b Bring over copy of TF# source (ericstj)
- 6201aa8 Add license to TF# files and ifdef to make them build (ericstj)
- b2a8016 Trim down TensorFlowSharp API (ericstj)
- d616fe5 Enable StyleCop and fix issues (ericstj)
- 4714fe8 Enable ML Codeanalyzer (ericstj)
- fcc7d75 Move TensorFlow support into Microsoft.ML.Transforms (ericstj)
- 78b8870 Put TensorFlow support under Microsoft.ML.Transforms.TensorFlow and m… (ericstj)
- f632824 Create a generic method for creating TF tensors (ericstj)
- 07b586b Merge pull request #4 from ericstj/wrapTF (zeahmed)
- a1350d1 Addressed reviewers' comments. (zeahmed)
- 67d765c Addressed reviewers' comments. (zeahmed)
- 25e477c Removed batch size as configurable param and setting it to one for now. (zeahmed)
- cff9552 Added TensorTypeFromType from TFSharp. (zeahmed)
@@ -0,0 +1,211 @@
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.

using System;
using System.Runtime.InteropServices;
using System.Text;
using size_t = System.UIntPtr;

#pragma warning disable MSML_GeneralName
#pragma warning disable MSML_ParameterLocalVarName

namespace Microsoft.ML.Transforms.TensorFlow
{
    // [Review comment] Change the tabs to spaces. (in the other files from TFSharp as well). #Resolved

    /// <summary>
    /// This attribute can be applied to callback functions that will be invoked
    /// from unmanaged code to managed code.
    /// </summary>
    /// <remarks>
    /// <code>
    /// [TensorFlow.MonoPInvokeCallback (typeof (BufferReleaseFunc))]
    /// internal static void MyFreeFunc (IntPtr data, IntPtr length){..}
    /// </code>
    /// </remarks>
    internal sealed class MonoPInvokeCallbackAttribute : Attribute
    {
        /// <summary>
        /// Use this constructor to annotate the type of the callback function that
        /// will be invoked from unmanaged code.
        /// </summary>
        /// <param name="t">The type of the callback delegate.</param>
        public MonoPInvokeCallbackAttribute (Type t) { }
    }

    [StructLayout (LayoutKind.Sequential)]
    internal struct LLBuffer
    {
        internal IntPtr data;
        internal size_t length;
        internal IntPtr data_deallocator;
    }

    /// <summary>
    /// Holds a block of data, suitable to pass, or retrieve from TensorFlow.
    /// </summary>
    /// <remarks>
    /// <para>
    /// Use the TFBuffer to pass blobs of data into TensorFlow, or to retrieve blocks
    /// of data out of TensorFlow.
    /// </para>
    /// <para>
    /// There are two constructors to wrap existing data, one to wrap blocks that are
    /// pointed to by an IntPtr and one that takes a byte array that we want to wrap.
    /// </para>
    /// <para>
    /// The empty constructor can be used to create a new TFBuffer that can be populated
    /// by the TensorFlow library and returned to user code.
    /// </para>
    /// <para>
    /// Typically, the data consists of a serialized protocol buffer, but other data
    /// may also be held in a buffer.
    /// </para>
    /// </remarks>
    // TODO: the string ctor
    // TODO: perhaps we should have an implicit byte [] conversion that just calls ToArray?
    internal class TFBuffer : TFDisposable
    {
        // extern TF_Buffer * TF_NewBufferFromString (const void *proto, size_t proto_len);
        [DllImport (NativeBinding.TensorFlowLibrary)]
        private static extern unsafe LLBuffer* TF_NewBufferFromString (IntPtr proto, IntPtr proto_len);

        // extern TF_Buffer * TF_NewBuffer ();
        [DllImport (NativeBinding.TensorFlowLibrary)]
        private static extern unsafe LLBuffer* TF_NewBuffer ();

        internal TFBuffer (IntPtr handle) : base (handle) { }

        /// <summary>
        /// Initializes a new instance of the <see cref="T:TensorFlow.TFBuffer"/> class.
        /// </summary>
        public unsafe TFBuffer () : base ((IntPtr)TF_NewBuffer ())
        {
        }

        /// <summary>
        /// Signature of the method that is invoked to release the data.
        /// </summary>
        /// <remarks>
        /// Methods of this signature are invoked with the data pointer and the
        /// length pointer when the TFBuffer no longer needs to hold on to the
        /// data. If you are using this on platforms with static compilation
        /// like iOS, you need to annotate your callback with the MonoPInvokeCallbackAttribute,
        /// like this:
        ///
        /// <code>
        /// [TensorFlow.MonoPInvokeCallback (typeof (BufferReleaseFunc))]
        /// internal static void MyFreeFunc (IntPtr data, IntPtr length){..}
        /// </code>
        /// </remarks>
        public delegate void BufferReleaseFunc (IntPtr data, IntPtr length);

        /// <summary>
        /// Initializes a new instance of the <see cref="T:TensorFlow.TFBuffer"/> by wrapping the unmanaged resource pointed by the buffer.
        /// </summary>
        /// <param name="buffer">Pointer to the data that will be wrapped.</param>
        /// <param name="size">The size of the buffer to wrap.</param>
        /// <param name="release">Optional, if not null, this method will be invoked to release the block.</param>
        /// <remarks>
        /// This constructor wraps the buffer as the data to be held by the <see cref="T:TensorFlow.TFBuffer"/>;
        /// if the release parameter is null, then you must ensure that the data is not released before the TFBuffer
        /// is no longer in use. If the value is not null, the provided method will be invoked to release
        /// the data when the TFBuffer is disposed, or the contents of the buffer replaced.
        /// </remarks>
        public unsafe TFBuffer (IntPtr buffer, long size, BufferReleaseFunc release) : base ((IntPtr)TF_NewBuffer ())
        {
            LLBuffer* buf = (LLBuffer*)handle;
            buf->data = buffer;
            buf->length = (size_t)size;
            if (release == null)
                buf->data_deallocator = IntPtr.Zero;
            else
                buf->data_deallocator = Marshal.GetFunctionPointerForDelegate (release);
        }
        [MonoPInvokeCallback (typeof (BufferReleaseFunc))]
        internal static void FreeBlock (IntPtr data, IntPtr length)
        {
            Marshal.FreeHGlobal (data);
        }

        internal static IntPtr FreeBufferFunc;
        internal static BufferReleaseFunc FreeBlockDelegate;

        static TFBuffer ()
        {
            FreeBlockDelegate = FreeBlock;
            FreeBufferFunc = Marshal.GetFunctionPointerForDelegate<BufferReleaseFunc> (FreeBlockDelegate);
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="T:TensorFlow.TFBuffer"/> by making a copy of the provided byte array.
        /// </summary>
        /// <param name="buffer">Buffer of data that will be wrapped.</param>
        /// <remarks>
        /// This constructor makes a copy of the data into an unmanaged buffer,
        /// so the byte array is not pinned.
        /// </remarks>
        public TFBuffer (byte [] buffer) : this (buffer, 0, buffer.Length) { }

        /// <summary>
        /// Initializes a new instance of the <see cref="T:TensorFlow.TFBuffer"/> by making a copy of the provided byte array.
        /// </summary>
        /// <param name="buffer">Buffer of data that will be wrapped.</param>
        /// <param name="start">Starting offset into the buffer to wrap.</param>
        /// <param name="count">Number of bytes from the buffer to keep.</param>
        /// <remarks>
        /// This constructor makes a copy of the data into an unmanaged buffer,
        /// so the byte array is not pinned.
        /// </remarks>
        public TFBuffer (byte [] buffer, int start, int count) : this ()
        {
            if (start < 0 || start >= buffer.Length)
                throw new ArgumentException ("start");
            if (count < 0 || count > buffer.Length - start)
                throw new ArgumentException ("count");
            unsafe
            {
                LLBuffer* buf = LLBuffer;
                buf->data = Marshal.AllocHGlobal (count);
                Marshal.Copy (buffer, start, buf->data, count);
                buf->length = (size_t)count;
                buf->data_deallocator = FreeBufferFunc;
            }
        }

        internal unsafe LLBuffer* LLBuffer => (LLBuffer*)handle;

        // extern void TF_DeleteBuffer (TF_Buffer *);
        [DllImport (NativeBinding.TensorFlowLibrary)]
        private static extern unsafe void TF_DeleteBuffer (LLBuffer* buffer);

        internal override void NativeDispose (IntPtr handle)
        {
            unsafe { TF_DeleteBuffer ((LLBuffer*)handle); }
        }

        // extern TF_Buffer TF_GetBuffer (TF_Buffer *buffer);
        [DllImport (NativeBinding.TensorFlowLibrary)]
        private static extern unsafe LLBuffer TF_GetBuffer (LLBuffer* buffer);

        /// <summary>
        /// Returns a byte array representing the data wrapped by this buffer.
        /// </summary>
        /// <returns>The array.</returns>
        public byte [] ToArray ()
        {
            if (handle == IntPtr.Zero)
                return null;

            unsafe
            {
                var lb = (LLBuffer*)handle;

                var result = new byte [(int)lb->length];
                Marshal.Copy (lb->data, result, 0, (int)lb->length);

                return result;
            }
        }
    }
}
[Review comment] Why did this change?